00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2466
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3727
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.128 The recommended git tool is: git
00:00:00.128 using credential 00000000-0000-0000-0000-000000000002
00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.164 Fetching changes from the remote Git repository
00:00:00.166 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.204 Using shallow fetch with depth 1
00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.204 > git --version # timeout=10
00:00:00.240 > git --version # 'git version 2.39.2'
00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.415 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.427 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.439 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.439 > git config core.sparsecheckout # timeout=10
00:00:07.450 > git read-tree -mu HEAD # timeout=10
00:00:07.465 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.484 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.484 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.563 [Pipeline] Start of Pipeline
00:00:07.577 [Pipeline] library
00:00:07.578 Loading library shm_lib@master
00:00:07.579 Library shm_lib@master is cached. Copying from home.
00:00:07.593 [Pipeline] node
00:00:07.602 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.604 [Pipeline] {
00:00:07.612 [Pipeline] catchError
00:00:07.613 [Pipeline] {
00:00:07.622 [Pipeline] wrap
00:00:07.629 [Pipeline] {
00:00:07.634 [Pipeline] stage
00:00:07.635 [Pipeline] { (Prologue)
00:00:07.653 [Pipeline] echo
00:00:07.654 Node: VM-host-SM38
00:00:07.660 [Pipeline] cleanWs
00:00:07.670 [WS-CLEANUP] Deleting project workspace...
00:00:07.670 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.677 [WS-CLEANUP] done
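Aside: the pinned, shallow checkout above can be reproduced by hand. A minimal sketch, assuming anonymous read access; the credentials (GIT_ASKPASS) and the HTTP proxy that Jenkins injects are omitted here:

#!/usr/bin/env bash
# Sketch: shallow fetch of the jbp repo pinned to the revision the job used.
set -euo pipefail
REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
REV=db4637e8b949f278f369ec13f70585206ccd9507   # revision resolved from refs/heads/master above

git init jbp && cd jbp
git fetch --tags --force --depth=1 "$REPO" refs/heads/master
git checkout -f "$REV"   # detached HEAD, same as the plugin's `git checkout -f`
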
00:00:07.874 [Pipeline] setCustomBuildProperty
00:00:07.934 [Pipeline] httpRequest
00:00:08.619 [Pipeline] echo
00:00:08.621 Sorcerer 10.211.164.20 is alive
00:00:08.631 [Pipeline] retry
00:00:08.633 [Pipeline] {
00:00:08.647 [Pipeline] httpRequest
00:00:08.652 HttpMethod: GET
00:00:08.652 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.653 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.654 Response Code: HTTP/1.1 200 OK
00:00:08.655 Success: Status code 200 is in the accepted range: 200,404
00:00:08.656 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.784 [Pipeline] }
00:00:09.802 [Pipeline] // retry
00:00:09.810 [Pipeline] sh
00:00:10.096 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.112 [Pipeline] httpRequest
00:00:10.430 [Pipeline] echo
00:00:10.432 Sorcerer 10.211.164.20 is alive
00:00:10.441 [Pipeline] retry
00:00:10.443 [Pipeline] {
00:00:10.457 [Pipeline] httpRequest
00:00:10.462 HttpMethod: GET
00:00:10.463 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:10.465 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:10.483 Response Code: HTTP/1.1 200 OK
00:00:10.483 Success: Status code 200 is in the accepted range: 200,404
00:00:10.484 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:55.201 [Pipeline] }
00:00:55.219 [Pipeline] // retry
00:00:55.227 [Pipeline] sh
00:00:55.518 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:58.070 [Pipeline] sh
00:00:58.356 + git -C spdk log --oneline -n5
00:00:58.356 c13c99a5e test: Various fixes for Fedora40
00:00:58.356 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:58.356 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:58.356 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:58.356 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:58.375 [Pipeline] writeFile
00:00:58.390 [Pipeline] sh
00:00:58.708 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:58.721 [Pipeline] sh
00:00:59.006 + cat autorun-spdk.conf
00:00:59.006 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.006 SPDK_TEST_NVME=1
00:00:59.006 SPDK_TEST_FTL=1
00:00:59.006 SPDK_TEST_ISAL=1
00:00:59.006 SPDK_RUN_ASAN=1
00:00:59.006 SPDK_RUN_UBSAN=1
00:00:59.006 SPDK_TEST_XNVME=1
00:00:59.006 SPDK_TEST_NVME_FDP=1
00:00:59.006 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:59.015 RUN_NIGHTLY=1
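Aside: autorun-spdk.conf is plain shell, so downstream scripts simply source it and branch on the flags. This mirrors the pattern visible in the prepare_nvme.sh xtrace below; the echoed messages are illustrative only:

# Sketch: how a consumer reads the conf printed above.
source ./autorun-spdk.conf

(( SPDK_TEST_FTL == 1 ))      && echo "will provision an FTL-capable NVMe backing file"
(( SPDK_TEST_NVME_FDP == 1 )) && echo "will provision an FDP-capable NVMe backing file"
(( RUN_NIGHTLY == 1 ))        && echo "nightly-only tests enabled"
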
00:00:59.017 [Pipeline] }
00:00:59.030 [Pipeline] // stage
00:00:59.043 [Pipeline] stage
00:00:59.045 [Pipeline] { (Run VM)
00:00:59.058 [Pipeline] sh
00:00:59.342 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:59.342 + echo 'Start stage prepare_nvme.sh'
00:00:59.342 Start stage prepare_nvme.sh
00:00:59.342 + [[ -n 2 ]]
00:00:59.342 + disk_prefix=ex2
00:00:59.342 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:59.342 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:59.342 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:59.342 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.342 ++ SPDK_TEST_NVME=1
00:00:59.342 ++ SPDK_TEST_FTL=1
00:00:59.342 ++ SPDK_TEST_ISAL=1
00:00:59.342 ++ SPDK_RUN_ASAN=1
00:00:59.342 ++ SPDK_RUN_UBSAN=1
00:00:59.342 ++ SPDK_TEST_XNVME=1
00:00:59.342 ++ SPDK_TEST_NVME_FDP=1
00:00:59.342 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:59.342 ++ RUN_NIGHTLY=1
00:00:59.342 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:59.342 + nvme_files=()
00:00:59.342 + declare -A nvme_files
00:00:59.342 + backend_dir=/var/lib/libvirt/images/backends
00:00:59.342 + nvme_files['nvme.img']=5G
00:00:59.342 + nvme_files['nvme-cmb.img']=5G
00:00:59.342 + nvme_files['nvme-multi0.img']=4G
00:00:59.342 + nvme_files['nvme-multi1.img']=4G
00:00:59.342 + nvme_files['nvme-multi2.img']=4G
00:00:59.342 + nvme_files['nvme-openstack.img']=8G
00:00:59.342 + nvme_files['nvme-zns.img']=5G
00:00:59.342 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:59.342 + (( SPDK_TEST_FTL == 1 ))
00:00:59.342 + nvme_files["nvme-ftl.img"]=6G
00:00:59.342 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:59.342 + nvme_files["nvme-fdp.img"]=1G
00:00:59.342 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:59.342 + for nvme in "${!nvme_files[@]}"
00:00:59.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:59.342 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:59.342 + for nvme in "${!nvme_files[@]}"
00:00:59.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:01:00.287 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:00.287 + for nvme in "${!nvme_files[@]}"
00:01:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:00.287 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.287 + for nvme in "${!nvme_files[@]}"
00:01:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:00.287 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:00.287 + for nvme in "${!nvme_files[@]}"
00:01:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:00.287 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.287 + for nvme in "${!nvme_files[@]}"
00:01:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:00.549 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.549 + for nvme in "${!nvme_files[@]}"
00:01:00.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:01.495 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:01.495 + for nvme in "${!nvme_files[@]}"
00:01:01.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:01:01.757 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:01.757 + for nvme in "${!nvme_files[@]}"
00:01:01.757 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:02.702 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:02.702 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:02.702 + echo 'End stage prepare_nvme.sh'
00:01:02.702 End stage prepare_nvme.sh
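Aside: create_nvme_img.sh is an SPDK-internal wrapper, but the "fmt=raw ... preallocation=falloc" lines it prints match what plain qemu-img produces. A sketch of equivalent provisioning, using the same names, sizes, and paths as this job:

# Sketch: raw, falloc-preallocated NVMe backing files without the wrapper.
declare -A nvme_files=(
  [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G
  [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
)
backend_dir=/var/lib/libvirt/images/backends
for img in "${!nvme_files[@]}"; do
    qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex2-$img" "${nvme_files[$img]}"
done
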
00:01:02.715 [Pipeline] sh
00:01:03.000 + DISTRO=fedora39
00:01:03.000 + CPUS=10
00:01:03.000 + RAM=12288
00:01:03.000 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:03.000 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:03.000
00:01:03.000 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:03.000 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:03.000 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:03.001 HELP=0
00:01:03.001 DRY_RUN=0
00:01:03.001 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:03.001 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:03.001 NVME_AUTO_CREATE=0
00:01:03.001 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:03.001 NVME_CMB=,,,,
00:01:03.001 NVME_PMR=,,,,
00:01:03.001 NVME_ZNS=,,,,
00:01:03.001 NVME_MS=true,,,,
00:01:03.001 NVME_FDP=,,,on,
00:01:03.001 SPDK_VAGRANT_DISTRO=fedora39
00:01:03.001 SPDK_VAGRANT_VMCPU=10
00:01:03.001 SPDK_VAGRANT_VMRAM=12288
00:01:03.001 SPDK_VAGRANT_PROVIDER=libvirt
00:01:03.001 SPDK_VAGRANT_HTTP_PROXY=
00:01:03.001 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:03.001 SPDK_OPENSTACK_NETWORK=0
00:01:03.001 VAGRANT_PACKAGE_BOX=0
00:01:03.001 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:03.001 FORCE_DISTRO=true
00:01:03.001 VAGRANT_BOX_VERSION=
00:01:03.001 EXTRA_VAGRANTFILES=
00:01:03.001 NIC_MODEL=e1000
00:01:03.001
00:01:03.001 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:03.001 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:05.549 Bringing machine 'default' up with 'libvirt' provider...
00:01:05.811 ==> default: Creating image (snapshot of base box volume).
00:01:05.811 ==> default: Creating domain with the following settings...
00:01:05.811 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734255534_fd1a84c386267f39a165
00:01:05.811 ==> default: -- Domain type: kvm
00:01:05.811 ==> default: -- Cpus: 10
00:01:05.811 ==> default: -- Feature: acpi
00:01:05.811 ==> default: -- Feature: apic
00:01:05.811 ==> default: -- Feature: pae
00:01:05.811 ==> default: -- Memory: 12288M
00:01:05.811 ==> default: -- Memory Backing: hugepages:
00:01:05.811 ==> default: -- Management MAC:
00:01:05.811 ==> default: -- Loader:
00:01:05.811 ==> default: -- Nvram:
00:01:05.811 ==> default: -- Base box: spdk/fedora39
00:01:05.811 ==> default: -- Storage pool: default
00:01:05.811 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734255534_fd1a84c386267f39a165.img (20G)
00:01:05.811 ==> default: -- Volume Cache: default
00:01:05.811 ==> default: -- Kernel:
00:01:05.811 ==> default: -- Initrd:
00:01:05.811 ==> default: -- Graphics Type: vnc
00:01:05.811 ==> default: -- Graphics Port: -1
00:01:05.811 ==> default: -- Graphics IP: 127.0.0.1
00:01:05.811 ==> default: -- Graphics Password: Not defined
00:01:05.811 ==> default: -- Video Type: cirrus
00:01:05.811 ==> default: -- Video VRAM: 9216
00:01:05.811 ==> default: -- Sound Type:
00:01:05.811 ==> default: -- Keymap: en-us
00:01:05.811 ==> default: -- TPM Path:
00:01:05.811 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:05.811 ==> default: -- Command line args:
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:05.811 ==> default: -> value=-drive,
00:01:05.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:05.811 ==> default: -> value=-drive,
00:01:05.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:05.811 ==> default: -> value=-drive,
00:01:05.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.811 ==> default: -> value=-drive,
00:01:05.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.811 ==> default: -> value=-drive,
00:01:05.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:05.811 ==> default: -> value=-drive,
00:01:05.811 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:05.811 ==> default: -> value=-device,
00:01:05.811 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
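Aside: the fourth controller is the interesting one. In QEMU, fdp=on enables Flexible Data Placement on the subsystem's endurance group, fdp.runs is the reclaim unit nominal size, fdp.nrg the number of reclaim groups, and fdp.nruh the number of reclaim unit handles. A standalone sketch of just that controller, with parameter values copied from the args above (the machine and memory flags are illustrative):

qemu-system-x86_64 -machine q35,accel=kvm -m 2G \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096

The plain controllers (nvme-0 through nvme-2) follow the same -device nvme / -drive / -device nvme-ns pattern; nvme-0 additionally carries 64 bytes of per-block metadata (ms=64) for FTL, and nvme-2 carries three namespaces.
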
00:01:06.072 ==> default: Creating shared folders metadata...
00:01:06.072 ==> default: Starting domain.
00:01:07.457 ==> default: Waiting for domain to get an IP address...
00:01:25.587 ==> default: Waiting for SSH to become available...
00:01:25.587 ==> default: Configuring and enabling network interfaces...
00:01:29.795 default: SSH address: 192.168.121.63:22
00:01:29.795 default: SSH username: vagrant
00:01:29.795 default: SSH auth method: private key
00:01:31.704 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:39.866 ==> default: Mounting SSHFS shared folder...
00:01:41.780 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:41.780 ==> default: Checking Mount..
00:01:43.167 ==> default: Folder Successfully Mounted!
00:01:43.167
00:01:43.167 SUCCESS!
00:01:43.167
00:01:43.167 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:43.167 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:43.167 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:43.167
00:01:43.177 [Pipeline] }
00:01:43.192 [Pipeline] // stage
00:01:43.201 [Pipeline] dir
00:01:43.202 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:43.203 [Pipeline] {
00:01:43.216 [Pipeline] catchError
00:01:43.218 [Pipeline] {
00:01:43.230 [Pipeline] sh
00:01:43.514 + sed -ne '/^Host/,$p'
00:01:43.514 + vagrant ssh-config --host vagrant
00:01:43.514 + tee ssh_conf
00:01:46.062 Host vagrant
00:01:46.062 HostName 192.168.121.63
00:01:46.062 User vagrant
00:01:46.062 Port 22
00:01:46.062 UserKnownHostsFile /dev/null
00:01:46.062 StrictHostKeyChecking no
00:01:46.062 PasswordAuthentication no
00:01:46.062 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:46.062 IdentitiesOnly yes
00:01:46.062 LogLevel FATAL
00:01:46.062 ForwardAgent yes
00:01:46.062 ForwardX11 yes
00:01:46.062
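Aside: the ssh_conf trick just captured is a reusable pattern. vagrant ssh-config emits a ready-made Host block; sed trims vagrant's leading chatter so plain ssh and scp can reach the VM via -F, as the pipeline does for every remote step below:

# Sketch: reuse the VM's key and address outside of `vagrant ssh`.
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
ssh -F ssh_conf vagrant@vagrant 'uname -r'        # runs inside the freshly booted VM
scp -F ssh_conf ./autorun-spdk.conf vagrant@vagrant:spdk_repo/
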
00:01:46.076 [Pipeline] withEnv
00:01:46.079 [Pipeline] {
00:01:46.091 [Pipeline] sh
00:01:46.374 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:46.374 source /etc/os-release
00:01:46.374 [[ -e /image.version ]] && img=$(< /image.version)
00:01:46.374 # Minimal, systemd-like check.
00:01:46.374 if [[ -e /.dockerenv ]]; then
00:01:46.374 # Clear garbage from the node'\''s name:
00:01:46.374 # agt-er_autotest_547-896 -> autotest_547-896
00:01:46.374 # $HOSTNAME is the actual container id
00:01:46.374 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:46.374 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:46.374 # We can assume this is a mount from a host where container is running,
00:01:46.374 # so fetch its hostname to easily identify the target swarm worker.
00:01:46.374 container="$(< /etc/hostname) ($agent)"
00:01:46.374 else
00:01:46.374 # Fallback
00:01:46.374 container=$agent
00:01:46.374 fi
00:01:46.374 fi
00:01:46.374 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:46.374 '
00:01:46.647 [Pipeline] }
00:01:46.663 [Pipeline] // withEnv
00:01:46.672 [Pipeline] setCustomBuildProperty
00:01:46.686 [Pipeline] stage
00:01:46.688 [Pipeline] { (Tests)
00:01:46.704 [Pipeline] sh
00:01:46.989 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:47.265 [Pipeline] sh
00:01:47.551 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:47.829 [Pipeline] timeout
00:01:47.829 Timeout set to expire in 50 min
00:01:47.831 [Pipeline] {
00:01:47.844 [Pipeline] sh
00:01:48.129 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:48.702 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:01:48.715 [Pipeline] sh
00:01:49.000 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:49.276 [Pipeline] sh
00:01:49.561 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:49.855 [Pipeline] sh
00:01:50.139 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:50.400 ++ readlink -f spdk_repo
00:01:50.400 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:50.400 + [[ -n /home/vagrant/spdk_repo ]]
00:01:50.400 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:50.400 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:50.400 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:50.400 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:50.400 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:50.400 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:50.400 + cd /home/vagrant/spdk_repo
00:01:50.400 + source /etc/os-release
00:01:50.400 ++ NAME='Fedora Linux'
00:01:50.400 ++ VERSION='39 (Cloud Edition)'
00:01:50.400 ++ ID=fedora
00:01:50.400 ++ VERSION_ID=39
00:01:50.400 ++ VERSION_CODENAME=
00:01:50.400 ++ PLATFORM_ID=platform:f39
00:01:50.400 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:50.400 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:50.400 ++ LOGO=fedora-logo-icon
00:01:50.400 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:50.400 ++ HOME_URL=https://fedoraproject.org/
00:01:50.400 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:50.400 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:50.400 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:50.400 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:50.400 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:50.400 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:50.400 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:50.400 ++ SUPPORT_END=2024-11-12
00:01:50.400 ++ VARIANT='Cloud Edition'
00:01:50.400 ++ VARIANT_ID=cloud
00:01:50.400 + uname -a
00:01:50.400 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:50.400 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:50.400 Hugepages
00:01:50.400 node hugesize free / total
00:01:50.400 node0 1048576kB 0 / 0
00:01:50.400 node0 2048kB 0 / 0
00:01:50.400
00:01:50.401 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:50.401 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:50.401 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:50.662 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:50.662 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:50.662 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:01:50.662 + rm -f /tmp/spdk-ld-path
00:01:50.662 + source autorun-spdk.conf
00:01:50.662 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.662 ++ SPDK_TEST_NVME=1
00:01:50.662 ++ SPDK_TEST_FTL=1
00:01:50.662 ++ SPDK_TEST_ISAL=1
00:01:50.662 ++ SPDK_RUN_ASAN=1
00:01:50.662 ++ SPDK_RUN_UBSAN=1
00:01:50.662 ++ SPDK_TEST_XNVME=1
00:01:50.662 ++ SPDK_TEST_NVME_FDP=1
00:01:50.662 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:50.662 ++ RUN_NIGHTLY=1
00:01:50.662 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:50.662 + [[ -n '' ]]
00:01:50.662 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:50.662 + for M in /var/spdk/build-*-manifest.txt
00:01:50.662 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:50.662 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:50.662 + for M in /var/spdk/build-*-manifest.txt
00:01:50.662 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:50.662 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:50.662 + for M in /var/spdk/build-*-manifest.txt
00:01:50.662 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:50.662 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:50.662 ++ uname
00:01:50.662 + [[ Linux == \L\i\n\u\x ]]
00:01:50.662 + sudo dmesg -T
00:01:50.662 + sudo dmesg --clear
00:01:50.662 + dmesg_pid=4990
00:01:50.662 + [[ Fedora Linux == FreeBSD ]]
00:01:50.662 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:50.662 + UNBIND_ENTIRE_IOMMU_GROUP=yes
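Aside: the `setup.sh status` table above can be cross-checked straight from sysfs, without SPDK. A read-only sketch that prints each controller's name, PCI address, and namespaces:

for c in /sys/class/nvme/nvme*; do
    ctrl=${c##*/}
    bdf=$(basename "$(readlink -f "$c/device")")   # e.g. 0000:00:06.0
    printf '%s %s' "$ctrl" "$bdf"
    for ns in "$c/$ctrl"n*; do [[ -e $ns ]] && printf ' %s' "${ns##*/}"; done
    printf '\n'
done

Note how nvme1 (the multi-namespace controller from the QEMU args) shows three namespaces, matching the table.
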
00:01:50.662 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:50.662 + [[ -x /usr/src/fio-static/fio ]]
00:01:50.662 + sudo dmesg -Tw
00:01:50.662 + export FIO_BIN=/usr/src/fio-static/fio
00:01:50.662 + FIO_BIN=/usr/src/fio-static/fio
00:01:50.662 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:50.662 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:50.662 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:50.662 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:50.662 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:50.662 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:50.662 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:50.662 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:50.662 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:50.662 Test configuration:
00:01:50.662 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:50.662 SPDK_TEST_NVME=1
00:01:50.662 SPDK_TEST_FTL=1
00:01:50.662 SPDK_TEST_ISAL=1
00:01:50.662 SPDK_RUN_ASAN=1
00:01:50.662 SPDK_RUN_UBSAN=1
00:01:50.662 SPDK_TEST_XNVME=1
00:01:50.662 SPDK_TEST_NVME_FDP=1
00:01:50.662 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:50.925 RUN_NIGHTLY=1 09:39:39 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:50.925 09:39:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:50.925 09:39:39 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:50.925 09:39:39 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:50.925 09:39:39 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:50.925 09:39:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.925 09:39:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.925 09:39:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.925 09:39:39 -- paths/export.sh@5 -- $ export PATH
00:01:50.925 09:39:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:50.925 09:39:39 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:50.925 09:39:39 -- common/autobuild_common.sh@440 -- $ date +%s
00:01:50.925 09:39:39 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734255579.XXXXXX
00:01:50.925 09:39:39 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734255579.UolN4M
00:01:50.925 09:39:39 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:50.925 09:39:39 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:50.925 09:39:39 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:50.925 09:39:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:50.925 09:39:39 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:50.925 09:39:39 -- common/autobuild_common.sh@456 -- $ get_config_params
00:01:50.925 09:39:39 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:50.925 09:39:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.925 09:39:39 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:50.925 09:39:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:50.925 09:39:39 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:50.925 09:39:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:50.925 09:39:39 -- spdk/autobuild.sh@16 -- $ date -u
00:01:50.925 Sun Dec 15 09:39:39 AM UTC 2024
00:01:50.925 09:39:39 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:50.925 LTS-67-gc13c99a5e
00:01:50.925 09:39:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:50.925 09:39:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:50.925 09:39:39 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:50.925 09:39:39 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:50.925 09:39:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.925 ************************************
00:01:50.925 START TEST asan
00:01:50.925 ************************************
00:01:50.925 using asan
00:01:50.925 09:39:39 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:01:50.925
00:01:50.925 real 0m0.000s
00:01:50.925 user 0m0.000s
00:01:50.925 sys 0m0.000s
00:01:50.925 09:39:39 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:50.925 ************************************
00:01:50.925 END TEST asan
00:01:50.925 ************************************
00:01:50.925 09:39:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.925 09:39:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:50.925 09:39:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:50.925 09:39:39 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:50.925 09:39:39 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:50.925 09:39:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.925 ************************************
00:01:50.925 START TEST ubsan
00:01:50.925 ************************************
00:01:50.925 using ubsan
00:01:50.925 09:39:39 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:50.925
00:01:50.925 real 0m0.000s
00:01:50.925 user 0m0.000s
00:01:50.925 sys 0m0.000s
00:01:50.925 ************************************
00:01:50.925 END TEST ubsan
00:01:50.925 ************************************
00:01:50.925 09:39:39 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:50.925 09:39:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.925 09:39:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:50.925 09:39:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:50.925 09:39:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:50.925 09:39:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:50.925 09:39:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:50.925 09:39:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:50.925 09:39:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:50.925 09:39:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:50.925 09:39:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:51.186 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:51.186 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:51.447 Using 'verbs' RDMA provider
00:02:04.630 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:14.636 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:14.636 Creating mk/config.mk...done.
00:02:14.636 Creating mk/cc.flags.mk...done.
00:02:14.636 Type 'make' to build.
00:02:14.636 09:40:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:14.636 09:40:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:14.636 09:40:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:14.636 09:40:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.636 ************************************
00:02:14.636 START TEST make
00:02:14.636 ************************************
00:02:14.636 09:40:03 -- common/autotest_common.sh@1114 -- $ make -j10
00:02:14.636 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:14.636 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:14.636 meson setup builddir \
00:02:14.636 -Dwith-libaio=enabled \
00:02:14.636 -Dwith-liburing=enabled \
00:02:14.636 -Dwith-libvfn=disabled \
00:02:14.636 -Dwith-spdk=false && \
00:02:14.636 meson compile -C builddir && \
00:02:14.636 cd -)
00:02:14.636 make[1]: Nothing to be done for 'all'.
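Aside: the START TEST / END TEST banners and zeroed timings framing the asan, ubsan, and make blocks come from the run_test helper, which the xtrace paths above place in SPDK's common/autotest_common.sh. A simplified stand-in that reproduces the observable pattern (not the real implementation):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test make make -j10
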
00:02:17.178 The Meson build system
00:02:17.178 Version: 1.5.0
00:02:17.178 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:17.178 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:17.178 Build type: native build
00:02:17.178 Project name: xnvme
00:02:17.178 Project version: 0.7.3
00:02:17.178 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:17.178 C linker for the host machine: cc ld.bfd 2.40-14
00:02:17.178 Host machine cpu family: x86_64
00:02:17.178 Host machine cpu: x86_64
00:02:17.178 Message: host_machine.system: linux
00:02:17.178 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:17.178 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:17.178 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:17.179 Run-time dependency threads found: YES
00:02:17.179 Has header "setupapi.h" : NO
00:02:17.179 Has header "linux/blkzoned.h" : YES
00:02:17.179 Has header "linux/blkzoned.h" : YES (cached)
00:02:17.179 Has header "libaio.h" : YES
00:02:17.179 Library aio found: YES
00:02:17.179 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:17.179 Run-time dependency liburing found: YES 2.2
00:02:17.179 Dependency libvfn skipped: feature with-libvfn disabled
00:02:17.179 Run-time dependency appleframeworks found: NO (tried framework)
00:02:17.179 Run-time dependency appleframeworks found: NO (tried framework)
00:02:17.179 Configuring xnvme_config.h using configuration
00:02:17.179 Configuring xnvme.spec using configuration
00:02:17.179 Run-time dependency bash-completion found: YES 2.11
00:02:17.179 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:17.179 Program cp found: YES (/usr/bin/cp)
00:02:17.179 Has header "winsock2.h" : NO
00:02:17.179 Has header "dbghelp.h" : NO
00:02:17.179 Library rpcrt4 found: NO
00:02:17.179 Library rt found: YES
00:02:17.179 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:17.179 Found CMake: /usr/bin/cmake (3.27.7)
00:02:17.179 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:17.179 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:17.179 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:17.179 Build targets in project: 32
00:02:17.179
00:02:17.179 xnvme 0.7.3
00:02:17.179
00:02:17.179 User defined options
00:02:17.179 with-libaio : enabled
00:02:17.179 with-liburing: enabled
00:02:17.179 with-libvfn : disabled
00:02:17.179 with-spdk : false
00:02:17.179
00:02:17.179 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:17.179 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:17.179 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:17.179 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:17.179 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:17.179 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:17.179 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:17.179 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:17.179 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:17.179 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:17.179 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:17.179 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:17.179 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:17.179 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:17.440 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:17.440 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:17.440 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:17.440 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:17.440 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:17.440 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:17.440 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:17.440 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:17.440 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:17.440 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:17.440 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:17.440 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:17.440 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:17.440 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:17.440 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:17.440 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:17.440 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:17.440 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:17.440 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:17.440 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:17.440 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:17.440 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:17.440 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:17.440 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:17.440 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:17.440 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:17.440 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:17.440 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:17.440 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:17.440 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:17.440 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:17.440 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:17.440 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:17.440 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:17.440 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:17.440 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:17.440 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:17.440 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:17.440 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:17.701 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:17.701 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:17.701 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:17.701 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:17.701 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:17.701 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:17.701 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:17.701 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:17.701 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:17.701 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:17.701 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:17.701 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:17.701 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:17.701 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:17.701 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:17.701 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:17.701 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:17.701 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:17.701 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:17.701 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:17.701 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:17.701 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:17.701 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:17.701 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:17.962 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:17.962 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:17.962 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:17.962 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:17.962 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:17.962 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:17.962 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:17.962 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:17.962 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:17.962 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:17.962 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:17.962 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:17.962 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:17.962 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:17.962 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:17.962 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:17.962 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:17.962 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:17.962 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:17.962 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:17.962 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:17.962 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:18.226 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:18.226 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:18.226 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:18.226 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:18.226 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:18.226 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:18.226 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:18.226 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:18.226 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:18.226 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:18.226 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:18.226 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:18.226 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:18.226 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:18.226 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:18.226 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:18.226 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:18.226 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:18.226 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:18.226 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:18.226 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:18.226 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:18.226 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:18.226 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:18.226 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:18.226 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:18.226 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:18.226 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:18.226 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:18.226 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:18.226 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:18.226 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:18.226 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:18.226 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:18.226 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:18.488 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:18.488 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:18.488 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:18.488 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:18.488 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:18.488 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:18.488 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:18.488 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:18.488 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:18.488 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:18.488 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:18.488 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:18.488 [145/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:18.488 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:18.488 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:18.488 [148/203] Linking target lib/libxnvme.so
00:02:18.488 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:18.488 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:18.488 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:18.488 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:18.488 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:18.747 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:18.747 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:18.747 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:18.747 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:18.747 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:18.747 [159/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:18.747 [160/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:18.747 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:18.747 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:18.747 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:18.747 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:18.747 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:18.747 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:18.747 [167/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:18.747 [168/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:18.747 [169/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:18.747 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:19.005 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:19.005 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:19.005 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:19.005 [174/203] Linking static target lib/libxnvme.a
00:02:19.005 [175/203] Linking target tests/xnvme_tests_cli
00:02:19.005 [176/203] Linking target tests/xnvme_tests_buf
00:02:19.005 [177/203] Linking target tests/xnvme_tests_async_intf
00:02:19.005 [178/203] Linking target tests/xnvme_tests_enum
00:02:19.005 [179/203] Linking target tests/xnvme_tests_ioworker
00:02:19.005 [180/203] Linking target tests/xnvme_tests_xnvme_file
00:02:19.005 [181/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:19.005 [182/203] Linking target tests/xnvme_tests_lblk
00:02:19.005 [183/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:19.005 [184/203] Linking target tests/xnvme_tests_znd_append
00:02:19.005 [185/203] Linking target tests/xnvme_tests_scc
00:02:19.005 [186/203] Linking target tests/xnvme_tests_map
00:02:19.005 [187/203] Linking target tests/xnvme_tests_znd_state
00:02:19.005 [188/203] Linking target tools/lblk
00:02:19.005 [189/203] Linking target tools/zoned
00:02:19.005 [190/203] Linking target tools/xnvme
00:02:19.005 [191/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:19.005 [192/203] Linking target tests/xnvme_tests_kvs
00:02:19.005 [193/203] Linking target examples/xnvme_dev
00:02:19.005 [194/203] Linking target tools/kvs
00:02:19.005 [195/203] Linking target tools/xdd
00:02:19.005 [196/203] Linking target examples/xnvme_enum
00:02:19.005 [197/203] Linking target tools/xnvme_file
00:02:19.005 [198/203] Linking target examples/xnvme_io_async
00:02:19.005 [199/203] Linking target examples/xnvme_single_sync
00:02:19.005 [200/203] Linking target examples/zoned_io_sync
00:02:19.005 [201/203] Linking target examples/xnvme_single_async
00:02:19.005 [202/203] Linking target examples/xnvme_hello
00:02:19.005 [203/203] Linking target examples/zoned_io_async
00:02:19.005 INFO: autodetecting backend as ninja
00:02:19.005 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:19.005 /home/vagrant/spdk_repo/spdk/xnvmebuild
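Aside: the ninja run leaves ready-to-run binaries in builddir. A hedged smoke test under the builddir layout implied by the target names above; `xnvme enum` lists the NVMe devices xNVMe can reach and needs root for device access:

cd /home/vagrant/spdk_repo/spdk/xnvme/builddir
ls lib/libxnvme.so lib/libxnvme.a tools/xnvme tools/lblk examples/xnvme_hello
sudo ./tools/xnvme enum    # expected: namespaces of the four emulated controllers
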
00:02:24.276 The Meson build system
00:02:24.276 Version: 1.5.0
00:02:24.276 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:24.276 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:24.276 Build type: native build
00:02:24.276 Program cat found: YES (/usr/bin/cat)
00:02:24.276 Project name: DPDK
00:02:24.276 Project version: 23.11.0
00:02:24.276 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:24.276 C linker for the host machine: cc ld.bfd 2.40-14
00:02:24.276 Host machine cpu family: x86_64
00:02:24.276 Host machine cpu: x86_64
00:02:24.276 Message: ## Building in Developer Mode ##
00:02:24.276 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:24.276 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:24.276 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:24.276 Program python3 found: YES (/usr/bin/python3)
00:02:24.276 Program cat found: YES (/usr/bin/cat)
00:02:24.276 Compiler for C supports arguments -march=native: YES
00:02:24.276 Checking for size of "void *" : 8
00:02:24.276 Checking for size of "void *" : 8 (cached)
00:02:24.276 Library m found: YES
00:02:24.276 Library numa found: YES
00:02:24.276 Has header "numaif.h" : YES
00:02:24.276 Library fdt found: NO
00:02:24.276 Library execinfo found: NO
00:02:24.276 Has header "execinfo.h" : YES
00:02:24.276 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:24.276 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:24.276 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:24.276 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:24.276 Run-time dependency openssl found: YES 3.1.1
00:02:24.276 Run-time dependency libpcap found: YES 1.10.4
00:02:24.276 Has header "pcap.h" with dependency libpcap: YES
00:02:24.276 Compiler for C supports arguments -Wcast-qual: YES
00:02:24.276 Compiler for C supports arguments -Wdeprecated: YES
00:02:24.276 Compiler for C supports arguments -Wformat: YES
00:02:24.276 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:24.276 Compiler for C supports arguments -Wformat-security: NO
00:02:24.276 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:24.276 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:24.276 Compiler for C supports arguments -Wnested-externs: YES
00:02:24.276 Compiler for C supports arguments -Wold-style-definition: YES
00:02:24.276 Compiler for C supports arguments -Wpointer-arith: YES
00:02:24.276 Compiler for C supports arguments -Wsign-compare: YES
00:02:24.276 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:24.276 Compiler for C supports arguments -Wundef: YES
00:02:24.276 Compiler for C supports arguments -Wwrite-strings: YES
00:02:24.276 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:24.276 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:24.276 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:24.276 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:24.276 Program objdump found: YES (/usr/bin/objdump)
00:02:24.276 Compiler for C supports arguments -mavx512f: YES
00:02:24.276 Checking if "AVX512 checking" compiles: YES
00:02:24.276 Fetching value of define "__SSE4_2__" : 1
00:02:24.276 Fetching value of define "__AES__" : 1
00:02:24.276 Fetching value of define "__AVX__" : 1
00:02:24.276 Fetching value of define "__AVX2__" : 1
00:02:24.276 Fetching value of define "__AVX512BW__" : 1
00:02:24.276 Fetching value of define "__AVX512CD__" : 1
00:02:24.276 Fetching value of define "__AVX512DQ__" : 1
00:02:24.276 Fetching value of define "__AVX512F__" : 1
00:02:24.276 Fetching value of define "__AVX512VL__" : 1
00:02:24.276 Fetching value of define "__PCLMUL__" : 1
00:02:24.276 Fetching value of define "__RDRND__" : 1
00:02:24.276 Fetching value of define "__RDSEED__" : 1
00:02:24.276 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:24.276 Fetching value of define "__znver1__" : (undefined)
00:02:24.276 Fetching value of define "__znver2__" : (undefined)
00:02:24.276 Fetching value of define "__znver3__" : (undefined)
00:02:24.276 Fetching value of define "__znver4__" : (undefined)
00:02:24.276 Library asan found: YES
00:02:24.276 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:24.276 Message: lib/log: Defining dependency "log"
00:02:24.276 Message: lib/kvargs: Defining dependency "kvargs"
00:02:24.276 Message: lib/telemetry: Defining dependency "telemetry"
00:02:24.276 Library rt found: YES
00:02:24.276 Checking for function "getentropy" : NO
00:02:24.276 Message: lib/eal: Defining dependency "eal"
00:02:24.276 Message: lib/ring: Defining dependency "ring"
00:02:24.276 Message: lib/rcu: Defining dependency "rcu"
00:02:24.276 Message: lib/mempool: Defining dependency "mempool"
00:02:24.276 Message: lib/mbuf: Defining dependency "mbuf"
00:02:24.276 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:24.276 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:24.276 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:24.276 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:24.276 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:24.276 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:24.276 Compiler for C supports arguments -mpclmul: YES
00:02:24.276 Compiler for C supports arguments -maes: YES
00:02:24.276 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:24.276 Compiler for C supports arguments -mavx512bw: YES
00:02:24.276 Compiler for C supports arguments -mavx512dq: YES
00:02:24.276 Compiler for C supports arguments -mavx512vl: YES
00:02:24.276 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:24.276 Compiler for C supports arguments -mavx2: YES
00:02:24.276 Compiler for C supports arguments -mavx: YES
00:02:24.276 Message: lib/net: Defining dependency "net"
00:02:24.276 Message: lib/meter: Defining dependency "meter"
00:02:24.276 Message: lib/ethdev: Defining dependency "ethdev"
00:02:24.276 Message: lib/pci: Defining dependency "pci"
00:02:24.276 Message: lib/cmdline: Defining dependency "cmdline"
00:02:24.276 Message: lib/hash: Defining dependency "hash"
00:02:24.276 Message: lib/timer: Defining dependency "timer"
00:02:24.276 Message: lib/compressdev: Defining dependency "compressdev"
00:02:24.276 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:24.276 Message: lib/dmadev: Defining dependency "dmadev"
00:02:24.276 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:24.276 Message: lib/power: Defining dependency "power"
00:02:24.276 Message: lib/reorder: Defining dependency "reorder"
00:02:24.276 Message: lib/security: Defining dependency "security"
00:02:24.276 Has header "linux/userfaultfd.h" : YES
00:02:24.276 Has header "linux/vduse.h" : YES
00:02:24.276 Message: lib/vhost: Defining dependency "vhost"
00:02:24.276 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:24.276 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:24.276 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:24.276 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:24.276 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:24.276 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:24.276 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:24.276 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:24.276 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:24.276 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:24.276 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:24.276 Configuring doxy-api-html.conf using configuration
00:02:24.276 Configuring doxy-api-man.conf using configuration
00:02:24.276 Program mandb found: YES (/usr/bin/mandb)
00:02:24.276 Program sphinx-build found: NO
00:02:24.276 Configuring rte_build_config.h using configuration
00:02:24.276 Message:
00:02:24.276 =================
00:02:24.276 Applications Enabled
00:02:24.276 =================
00:02:24.276
00:02:24.276 apps:
00:02:24.276
00:02:24.276
00:02:24.276 Message:
00:02:24.276 =================
00:02:24.276 Libraries Enabled
00:02:24.276 =================
00:02:24.276
00:02:24.276 libs:
00:02:24.276 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:24.276 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:24.276 cryptodev, dmadev, power, reorder, security, vhost,
00:02:24.276
00:02:24.276 Message:
00:02:24.276 ===============
00:02:24.276 Drivers Enabled
00:02:24.276 ===============
00:02:24.276
00:02:24.276 common:
00:02:24.276
00:02:24.276 bus:
00:02:24.276 pci, vdev,
00:02:24.276 mempool:
00:02:24.276 ring,
00:02:24.276 dma:
00:02:24.276
00:02:24.276 net:
00:02:24.276
00:02:24.276 crypto:
00:02:24.276
00:02:24.276 compress:
00:02:24.276
00:02:24.276 vdpa:
00:02:24.276
00:02:24.276
00:02:24.276 Message:
00:02:24.276 =================
00:02:24.276 Content Skipped
00:02:24.276 =================
00:02:24.276
00:02:24.276 apps:
00:02:24.276 dumpcap: explicitly disabled via build config
00:02:24.276 graph: explicitly disabled via build config
00:02:24.276 pdump: explicitly disabled via build config
00:02:24.276 proc-info: explicitly disabled via build config
00:02:24.276 test-acl: explicitly disabled via build config
00:02:24.276 test-bbdev: explicitly disabled via build config
00:02:24.276 test-cmdline: explicitly disabled via build config
00:02:24.276 test-compress-perf: explicitly disabled via build config
00:02:24.276 test-crypto-perf: explicitly disabled via build config
00:02:24.276 test-dma-perf: explicitly disabled via build config
00:02:24.276 test-eventdev: explicitly disabled via build config
00:02:24.276 test-fib: explicitly disabled via build config
00:02:24.276 test-flow-perf: explicitly disabled via build config
00:02:24.276 test-gpudev: explicitly disabled via build config
00:02:24.276 test-mldev: explicitly disabled via build config
00:02:24.276 test-pipeline: explicitly disabled via build config
00:02:24.276 test-pmd: explicitly disabled via build config
00:02:24.276 test-regex: explicitly disabled via build config
00:02:24.276 test-sad: explicitly disabled via build config
00:02:24.277 test-security-perf: explicitly disabled via build config
00:02:24.277
00:02:24.277 libs:
00:02:24.277 metrics: explicitly disabled via build config
00:02:24.277 acl: explicitly disabled via build config
00:02:24.277 bbdev: explicitly disabled via build config
00:02:24.277 bitratestats: explicitly disabled via build config
00:02:24.277 bpf: explicitly disabled via build config
00:02:24.277 cfgfile: explicitly disabled via build config
00:02:24.277 distributor: explicitly disabled via build config
00:02:24.277 efd: explicitly disabled via build config
00:02:24.277 eventdev: explicitly disabled via build config
00:02:24.277 dispatcher: explicitly disabled via build config
00:02:24.277 gpudev: explicitly disabled via build config
00:02:24.277 gro: explicitly disabled via build config
00:02:24.277 gso: explicitly disabled via build config
00:02:24.277 ip_frag: explicitly disabled via build config
00:02:24.277 jobstats: explicitly disabled via build config
00:02:24.277 latencystats: explicitly disabled via build config
00:02:24.277 lpm: explicitly disabled via build config
00:02:24.277 member: explicitly disabled via build config
00:02:24.277 pcapng: explicitly disabled via build config
00:02:24.277 rawdev: explicitly disabled via build config
00:02:24.277 regexdev: explicitly disabled via build config
00:02:24.277 mldev: explicitly disabled via build config
00:02:24.277 rib: explicitly disabled via build config
00:02:24.277 sched: explicitly disabled via build config
00:02:24.277 stack: explicitly disabled via build config
00:02:24.277 ipsec: explicitly disabled via build config
00:02:24.277 pdcp: explicitly disabled via build config
00:02:24.277 fib: explicitly disabled via build config
00:02:24.277 port: explicitly disabled via build config
00:02:24.277 pdump: explicitly disabled via build config
00:02:24.277 table: explicitly disabled via build config
00:02:24.277 pipeline: explicitly disabled via build config
00:02:24.277 graph: explicitly disabled via build config
00:02:24.277 node: explicitly disabled via build config
00:02:24.277
00:02:24.277 drivers:
00:02:24.277 common/cpt: not in enabled drivers build config
00:02:24.277 common/dpaax: not in enabled drivers build config
00:02:24.277 common/iavf: not in enabled drivers build config
00:02:24.277 common/idpf: not in enabled drivers build config
00:02:24.277 common/mvep: not in enabled drivers build config
00:02:24.277 common/octeontx: not in enabled drivers build config
00:02:24.277 bus/auxiliary: not in enabled drivers build config
00:02:24.277 bus/cdx: not in enabled drivers build config
00:02:24.277 bus/dpaa: not in enabled drivers build config
00:02:24.277 bus/fslmc: not in enabled drivers build config
bus/ifpga: not in enabled drivers build config 00:02:24.277 bus/platform: not in enabled drivers build config 00:02:24.277 bus/vmbus: not in enabled drivers build config 00:02:24.277 common/cnxk: not in enabled drivers build config 00:02:24.277 common/mlx5: not in enabled drivers build config 00:02:24.277 common/nfp: not in enabled drivers build config 00:02:24.277 common/qat: not in enabled drivers build config 00:02:24.277 common/sfc_efx: not in enabled drivers build config 00:02:24.277 mempool/bucket: not in enabled drivers build config 00:02:24.277 mempool/cnxk: not in enabled drivers build config 00:02:24.277 mempool/dpaa: not in enabled drivers build config 00:02:24.277 mempool/dpaa2: not in enabled drivers build config 00:02:24.277 mempool/octeontx: not in enabled drivers build config 00:02:24.277 mempool/stack: not in enabled drivers build config 00:02:24.277 dma/cnxk: not in enabled drivers build config 00:02:24.277 dma/dpaa: not in enabled drivers build config 00:02:24.277 dma/dpaa2: not in enabled drivers build config 00:02:24.277 dma/hisilicon: not in enabled drivers build config 00:02:24.277 dma/idxd: not in enabled drivers build config 00:02:24.277 dma/ioat: not in enabled drivers build config 00:02:24.277 dma/skeleton: not in enabled drivers build config 00:02:24.277 net/af_packet: not in enabled drivers build config 00:02:24.277 net/af_xdp: not in enabled drivers build config 00:02:24.277 net/ark: not in enabled drivers build config 00:02:24.277 net/atlantic: not in enabled drivers build config 00:02:24.277 net/avp: not in enabled drivers build config 00:02:24.277 net/axgbe: not in enabled drivers build config 00:02:24.277 net/bnx2x: not in enabled drivers build config 00:02:24.277 net/bnxt: not in enabled drivers build config 00:02:24.277 net/bonding: not in enabled drivers build config 00:02:24.277 net/cnxk: not in enabled drivers build config 00:02:24.277 net/cpfl: not in enabled drivers build config 00:02:24.277 net/cxgbe: not in enabled drivers build config 00:02:24.277 net/dpaa: not in enabled drivers build config 00:02:24.277 net/dpaa2: not in enabled drivers build config 00:02:24.277 net/e1000: not in enabled drivers build config 00:02:24.277 net/ena: not in enabled drivers build config 00:02:24.277 net/enetc: not in enabled drivers build config 00:02:24.277 net/enetfec: not in enabled drivers build config 00:02:24.277 net/enic: not in enabled drivers build config 00:02:24.277 net/failsafe: not in enabled drivers build config 00:02:24.277 net/fm10k: not in enabled drivers build config 00:02:24.277 net/gve: not in enabled drivers build config 00:02:24.277 net/hinic: not in enabled drivers build config 00:02:24.277 net/hns3: not in enabled drivers build config 00:02:24.277 net/i40e: not in enabled drivers build config 00:02:24.277 net/iavf: not in enabled drivers build config 00:02:24.277 net/ice: not in enabled drivers build config 00:02:24.277 net/idpf: not in enabled drivers build config 00:02:24.277 net/igc: not in enabled drivers build config 00:02:24.277 net/ionic: not in enabled drivers build config 00:02:24.277 net/ipn3ke: not in enabled drivers build config 00:02:24.277 net/ixgbe: not in enabled drivers build config 00:02:24.277 net/mana: not in enabled drivers build config 00:02:24.277 net/memif: not in enabled drivers build config 00:02:24.277 net/mlx4: not in enabled drivers build config 00:02:24.277 net/mlx5: not in enabled drivers build config 00:02:24.277 net/mvneta: not in enabled drivers build config 00:02:24.277 net/mvpp2: not in enabled drivers 
build config 00:02:24.277 net/netvsc: not in enabled drivers build config 00:02:24.277 net/nfb: not in enabled drivers build config 00:02:24.277 net/nfp: not in enabled drivers build config 00:02:24.277 net/ngbe: not in enabled drivers build config 00:02:24.277 net/null: not in enabled drivers build config 00:02:24.277 net/octeontx: not in enabled drivers build config 00:02:24.277 net/octeon_ep: not in enabled drivers build config 00:02:24.277 net/pcap: not in enabled drivers build config 00:02:24.277 net/pfe: not in enabled drivers build config 00:02:24.277 net/qede: not in enabled drivers build config 00:02:24.277 net/ring: not in enabled drivers build config 00:02:24.277 net/sfc: not in enabled drivers build config 00:02:24.277 net/softnic: not in enabled drivers build config 00:02:24.277 net/tap: not in enabled drivers build config 00:02:24.277 net/thunderx: not in enabled drivers build config 00:02:24.277 net/txgbe: not in enabled drivers build config 00:02:24.277 net/vdev_netvsc: not in enabled drivers build config 00:02:24.277 net/vhost: not in enabled drivers build config 00:02:24.277 net/virtio: not in enabled drivers build config 00:02:24.277 net/vmxnet3: not in enabled drivers build config 00:02:24.277 raw/*: missing internal dependency, "rawdev" 00:02:24.277 crypto/armv8: not in enabled drivers build config 00:02:24.277 crypto/bcmfs: not in enabled drivers build config 00:02:24.277 crypto/caam_jr: not in enabled drivers build config 00:02:24.277 crypto/ccp: not in enabled drivers build config 00:02:24.277 crypto/cnxk: not in enabled drivers build config 00:02:24.277 crypto/dpaa_sec: not in enabled drivers build config 00:02:24.277 crypto/dpaa2_sec: not in enabled drivers build config 00:02:24.277 crypto/ipsec_mb: not in enabled drivers build config 00:02:24.277 crypto/mlx5: not in enabled drivers build config 00:02:24.277 crypto/mvsam: not in enabled drivers build config 00:02:24.277 crypto/nitrox: not in enabled drivers build config 00:02:24.277 crypto/null: not in enabled drivers build config 00:02:24.277 crypto/octeontx: not in enabled drivers build config 00:02:24.277 crypto/openssl: not in enabled drivers build config 00:02:24.277 crypto/scheduler: not in enabled drivers build config 00:02:24.277 crypto/uadk: not in enabled drivers build config 00:02:24.277 crypto/virtio: not in enabled drivers build config 00:02:24.277 compress/isal: not in enabled drivers build config 00:02:24.277 compress/mlx5: not in enabled drivers build config 00:02:24.277 compress/octeontx: not in enabled drivers build config 00:02:24.277 compress/zlib: not in enabled drivers build config 00:02:24.277 regex/*: missing internal dependency, "regexdev" 00:02:24.277 ml/*: missing internal dependency, "mldev" 00:02:24.277 vdpa/ifc: not in enabled drivers build config 00:02:24.277 vdpa/mlx5: not in enabled drivers build config 00:02:24.277 vdpa/nfp: not in enabled drivers build config 00:02:24.277 vdpa/sfc: not in enabled drivers build config 00:02:24.277 event/*: missing internal dependency, "eventdev" 00:02:24.277 baseband/*: missing internal dependency, "bbdev" 00:02:24.277 gpu/*: missing internal dependency, "gpudev" 00:02:24.277 00:02:24.277 00:02:24.277 Build targets in project: 84 00:02:24.277 00:02:24.277 DPDK 23.11.0 00:02:24.277 00:02:24.277 User defined options 00:02:24.277 buildtype : debug 00:02:24.277 default_library : shared 00:02:24.277 libdir : lib 00:02:24.277 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.277 b_sanitize : address 00:02:24.277 c_args : -fPIC -Werror 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:24.277 c_link_args : 00:02:24.277 cpu_instruction_set: native 00:02:24.277 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:24.277 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:24.277 enable_docs : false 00:02:24.277 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:24.277 enable_kmods : false 00:02:24.277 tests : false 00:02:24.277 00:02:24.277 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.536 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:24.536 [1/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.536 [2/264] Linking static target lib/librte_kvargs.a 00:02:24.536 [3/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.536 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.536 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.536 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.536 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.536 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.536 [9/264] Linking static target lib/librte_log.a 00:02:24.536 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.795 [11/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.795 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.795 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.795 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.053 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.053 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.053 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.053 [18/264] Linking static target lib/librte_telemetry.a 00:02:25.053 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.053 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.053 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.053 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.312 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.312 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.312 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.312 [26/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.312 [27/264] Linking target lib/librte_log.so.24.0 00:02:25.312 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.571 [29/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:25.571 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.571 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.571 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.571 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.571 [34/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:25.571 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.571 [36/264] Linking target lib/librte_kvargs.so.24.0 00:02:25.571 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.830 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.830 [39/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.830 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.830 [41/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:25.830 [42/264] Linking target lib/librte_telemetry.so.24.0 00:02:25.830 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.830 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.830 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.830 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.088 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:26.088 [48/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:26.088 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:26.088 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.088 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.088 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.088 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.088 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.400 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.400 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.400 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.400 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.400 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.400 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.400 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.400 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.400 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.668 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.668 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.668 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.668 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.668 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:26.668 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:26.668 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:26.668 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:26.927 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:26.927 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:26.927 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:26.927 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:26.927 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:26.927 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:26.927 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.185 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.185 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.185 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.185 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.185 [83/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.185 [84/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.185 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.442 [86/264] Linking static target lib/librte_eal.a 00:02:27.442 [87/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.442 [88/264] Linking static target lib/librte_ring.a 00:02:27.442 [89/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:27.442 [90/264] Linking static target lib/librte_rcu.a 00:02:27.442 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.442 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:27.442 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.700 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.700 [95/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:27.700 [96/264] Linking static target lib/librte_mempool.a 00:02:27.700 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.958 [98/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.958 [99/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.958 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.958 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.958 [102/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.958 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.217 [104/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.217 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:28.217 [106/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.217 [107/264] Linking static target lib/librte_net.a 00:02:28.217 [108/264] Linking static target lib/librte_meter.a 00:02:28.217 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.217 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.217 
[111/264] Linking static target lib/librte_mbuf.a 00:02:28.476 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.476 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.476 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.476 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.476 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.734 [117/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.734 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:28.734 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.993 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:28.993 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.993 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.993 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.993 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:28.993 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:28.993 [126/264] Linking static target lib/librte_pci.a 00:02:28.993 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:29.251 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:29.251 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.251 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:29.251 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:29.251 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.251 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:29.251 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:29.251 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.251 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.251 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:29.251 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:29.251 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.510 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.510 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:29.510 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:29.510 [143/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.510 [144/264] Linking static target lib/librte_cmdline.a 00:02:29.768 [145/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:29.768 [146/264] Linking static target lib/librte_timer.a 00:02:29.768 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.768 [148/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.768 [149/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:29.768 [150/264] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.768 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:30.027 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.027 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.027 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.027 [155/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.027 [156/264] Linking static target lib/librte_compressdev.a 00:02:30.285 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.285 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.285 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.285 [160/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.285 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.285 [162/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.285 [163/264] Linking static target lib/librte_hash.a 00:02:30.285 [164/264] Linking static target lib/librte_dmadev.a 00:02:30.285 [165/264] Linking static target lib/librte_ethdev.a 00:02:30.543 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.543 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.543 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.543 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.801 [170/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.802 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.802 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.802 [173/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.802 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.802 [175/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.060 [176/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.060 [177/264] Linking static target lib/librte_cryptodev.a 00:02:31.060 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.060 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:31.060 [180/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.060 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:31.318 [182/264] Linking static target lib/librte_power.a 00:02:31.318 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.318 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.318 [185/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.318 [186/264] Linking static target lib/librte_reorder.a 00:02:31.577 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.577 [188/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.577 [189/264] Linking static target lib/librte_security.a 00:02:31.835 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 
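The targets above are DPDK 23.11.0's core libraries (EAL, ring, mempool, mbuf, hash, cryptodev, power) being compiled as shared debug objects per the user-defined options recorded earlier in this log. For orientation only, a minimal consumer of the librte_eal and librte_mbuf artifacts produced here might look like the sketch below; the pool name, element count, and cache size are illustrative values, not anything taken from this build.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
    /* Hand the command line to the EAL (librte_eal above); it consumes
     * its own flags such as -l or --no-huge and returns how many it ate. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "EAL init failed: %s\n", rte_strerror(rte_errno));
        return 1;
    }

    /* A packet-buffer pool backed by librte_mempool and librte_mbuf.
     * Name and sizes here are illustrative, not values from this build. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
            8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        fprintf(stderr, "mbuf pool creation failed: %s\n",
                rte_strerror(rte_errno));

    rte_eal_cleanup();
    return 0;
}

Every DPDK application begins with rte_eal_init(), which is why librte_eal is the first static target the build produces before anything else can link.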
00:02:31.835 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.835 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.094 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.094 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.094 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.094 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:32.094 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:32.094 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.352 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.352 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.352 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.352 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.352 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.352 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.352 [205/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.611 [206/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.611 [207/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.611 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.611 [209/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.611 [210/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.611 [211/264] Linking static target drivers/librte_bus_pci.a 00:02:32.611 [212/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.611 [213/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.611 [214/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.611 [215/264] Linking static target drivers/librte_bus_vdev.a 00:02:32.870 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.870 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.870 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.870 [219/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.870 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.870 [221/264] Linking static target drivers/librte_mempool_ring.a 00:02:32.870 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.128 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.386 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.761 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.761 [226/264] Linking target lib/librte_eal.so.24.0 00:02:34.761 [227/264] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:34.761 [228/264] Linking target lib/librte_ring.so.24.0 00:02:34.761 [229/264] Linking target lib/librte_meter.so.24.0 00:02:34.761 [230/264] Linking target lib/librte_pci.so.24.0 00:02:34.761 [231/264] Linking target lib/librte_timer.so.24.0 00:02:34.761 [232/264] Linking target lib/librte_dmadev.so.24.0 00:02:34.761 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:34.761 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:34.761 [235/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:34.761 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:34.761 [237/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:34.761 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:34.761 [239/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:34.761 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:34.761 [241/264] Linking target lib/librte_mempool.so.24.0 00:02:35.020 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:35.020 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:35.020 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:35.020 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:35.020 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:35.279 [247/264] Linking target lib/librte_compressdev.so.24.0 00:02:35.279 [248/264] Linking target lib/librte_reorder.so.24.0 00:02:35.279 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:02:35.279 [250/264] Linking target lib/librte_net.so.24.0 00:02:35.279 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:35.279 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:35.279 [253/264] Linking target lib/librte_hash.so.24.0 00:02:35.279 [254/264] Linking target lib/librte_security.so.24.0 00:02:35.279 [255/264] Linking target lib/librte_cmdline.so.24.0 00:02:35.538 [256/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.538 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:35.538 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:35.538 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:35.538 [260/264] Linking target lib/librte_power.so.24.0 00:02:35.796 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.054 [262/264] Linking static target lib/librte_vhost.a 00:02:37.427 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.427 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:37.427 INFO: autodetecting backend as ninja 00:02:37.427 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:37.992 CC lib/log/log.o 00:02:37.992 CC lib/ut_mock/mock.o 00:02:37.992 CC lib/log/log_flags.o 00:02:37.992 CC lib/log/log_deprecated.o 00:02:38.249 CC lib/ut/ut.o 00:02:38.249 LIB libspdk_ut_mock.a 00:02:38.249 LIB libspdk_log.a 00:02:38.250 SO libspdk_ut_mock.so.5.0 00:02:38.250 LIB libspdk_ut.a 00:02:38.250 SO libspdk_ut.so.1.0 00:02:38.250 SO 
libspdk_log.so.6.1 00:02:38.250 SYMLINK libspdk_ut_mock.so 00:02:38.250 SYMLINK libspdk_ut.so 00:02:38.250 SYMLINK libspdk_log.so 00:02:38.507 CXX lib/trace_parser/trace.o 00:02:38.507 CC lib/dma/dma.o 00:02:38.507 CC lib/ioat/ioat.o 00:02:38.508 CC lib/util/base64.o 00:02:38.508 CC lib/util/bit_array.o 00:02:38.508 CC lib/util/cpuset.o 00:02:38.508 CC lib/util/crc16.o 00:02:38.508 CC lib/util/crc32.o 00:02:38.508 CC lib/util/crc32c.o 00:02:38.508 CC lib/vfio_user/host/vfio_user_pci.o 00:02:38.508 CC lib/util/crc32_ieee.o 00:02:38.508 CC lib/util/crc64.o 00:02:38.508 CC lib/util/dif.o 00:02:38.508 CC lib/util/fd.o 00:02:38.508 LIB libspdk_dma.a 00:02:38.508 CC lib/util/file.o 00:02:38.508 SO libspdk_dma.so.3.0 00:02:38.766 CC lib/util/hexlify.o 00:02:38.766 LIB libspdk_ioat.a 00:02:38.766 CC lib/util/iov.o 00:02:38.766 CC lib/util/math.o 00:02:38.766 SO libspdk_ioat.so.6.0 00:02:38.766 SYMLINK libspdk_dma.so 00:02:38.766 CC lib/util/pipe.o 00:02:38.766 CC lib/vfio_user/host/vfio_user.o 00:02:38.766 CC lib/util/strerror_tls.o 00:02:38.766 SYMLINK libspdk_ioat.so 00:02:38.766 CC lib/util/string.o 00:02:38.766 CC lib/util/uuid.o 00:02:38.766 CC lib/util/fd_group.o 00:02:38.766 CC lib/util/xor.o 00:02:38.766 CC lib/util/zipf.o 00:02:38.766 LIB libspdk_vfio_user.a 00:02:38.766 SO libspdk_vfio_user.so.4.0 00:02:39.023 SYMLINK libspdk_vfio_user.so 00:02:39.023 LIB libspdk_util.a 00:02:39.281 SO libspdk_util.so.8.0 00:02:39.281 LIB libspdk_trace_parser.a 00:02:39.281 SYMLINK libspdk_util.so 00:02:39.281 SO libspdk_trace_parser.so.4.0 00:02:39.281 SYMLINK libspdk_trace_parser.so 00:02:39.281 CC lib/vmd/led.o 00:02:39.281 CC lib/vmd/vmd.o 00:02:39.281 CC lib/json/json_parse.o 00:02:39.281 CC lib/json/json_util.o 00:02:39.281 CC lib/idxd/idxd.o 00:02:39.281 CC lib/json/json_write.o 00:02:39.281 CC lib/idxd/idxd_user.o 00:02:39.281 CC lib/env_dpdk/env.o 00:02:39.281 CC lib/conf/conf.o 00:02:39.281 CC lib/rdma/common.o 00:02:39.557 CC lib/env_dpdk/memory.o 00:02:39.557 LIB libspdk_conf.a 00:02:39.557 SO libspdk_conf.so.5.0 00:02:39.557 CC lib/env_dpdk/pci.o 00:02:39.557 SYMLINK libspdk_conf.so 00:02:39.557 CC lib/rdma/rdma_verbs.o 00:02:39.557 CC lib/env_dpdk/init.o 00:02:39.557 CC lib/env_dpdk/threads.o 00:02:39.557 CC lib/env_dpdk/pci_ioat.o 00:02:39.557 LIB libspdk_json.a 00:02:39.867 SO libspdk_json.so.5.1 00:02:39.867 CC lib/env_dpdk/pci_virtio.o 00:02:39.867 SYMLINK libspdk_json.so 00:02:39.867 CC lib/env_dpdk/pci_vmd.o 00:02:39.867 LIB libspdk_rdma.a 00:02:39.867 CC lib/env_dpdk/pci_idxd.o 00:02:39.867 SO libspdk_rdma.so.5.0 00:02:39.867 CC lib/env_dpdk/pci_event.o 00:02:39.867 SYMLINK libspdk_rdma.so 00:02:39.867 CC lib/env_dpdk/sigbus_handler.o 00:02:39.867 CC lib/env_dpdk/pci_dpdk.o 00:02:39.867 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:39.867 CC lib/idxd/idxd_kernel.o 00:02:39.867 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.133 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:40.133 CC lib/jsonrpc/jsonrpc_client.o 00:02:40.133 CC lib/jsonrpc/jsonrpc_server.o 00:02:40.133 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:40.133 LIB libspdk_vmd.a 00:02:40.133 LIB libspdk_idxd.a 00:02:40.133 SO libspdk_vmd.so.5.0 00:02:40.133 SO libspdk_idxd.so.11.0 00:02:40.133 SYMLINK libspdk_vmd.so 00:02:40.133 SYMLINK libspdk_idxd.so 00:02:40.133 LIB libspdk_jsonrpc.a 00:02:40.391 SO libspdk_jsonrpc.so.5.1 00:02:40.391 SYMLINK libspdk_jsonrpc.so 00:02:40.649 CC lib/rpc/rpc.o 00:02:40.649 LIB libspdk_rpc.a 00:02:40.649 SO libspdk_rpc.so.5.0 00:02:40.649 SYMLINK libspdk_rpc.so 00:02:40.649 LIB libspdk_env_dpdk.a 00:02:40.907 SO 
libspdk_env_dpdk.so.13.0 00:02:40.907 CC lib/sock/sock.o 00:02:40.907 CC lib/sock/sock_rpc.o 00:02:40.907 CC lib/trace/trace.o 00:02:40.907 CC lib/trace/trace_rpc.o 00:02:40.907 CC lib/trace/trace_flags.o 00:02:40.907 CC lib/notify/notify.o 00:02:40.907 CC lib/notify/notify_rpc.o 00:02:40.907 SYMLINK libspdk_env_dpdk.so 00:02:40.907 LIB libspdk_notify.a 00:02:40.907 SO libspdk_notify.so.5.0 00:02:40.907 SYMLINK libspdk_notify.so 00:02:41.164 LIB libspdk_trace.a 00:02:41.164 SO libspdk_trace.so.9.0 00:02:41.164 SYMLINK libspdk_trace.so 00:02:41.164 LIB libspdk_sock.a 00:02:41.164 SO libspdk_sock.so.8.0 00:02:41.164 CC lib/thread/thread.o 00:02:41.164 CC lib/thread/iobuf.o 00:02:41.422 SYMLINK libspdk_sock.so 00:02:41.422 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:41.422 CC lib/nvme/nvme_ns.o 00:02:41.422 CC lib/nvme/nvme_ctrlr.o 00:02:41.422 CC lib/nvme/nvme_fabric.o 00:02:41.422 CC lib/nvme/nvme_qpair.o 00:02:41.422 CC lib/nvme/nvme_ns_cmd.o 00:02:41.422 CC lib/nvme/nvme_pcie_common.o 00:02:41.422 CC lib/nvme/nvme_pcie.o 00:02:41.679 CC lib/nvme/nvme.o 00:02:41.937 CC lib/nvme/nvme_quirks.o 00:02:41.937 CC lib/nvme/nvme_transport.o 00:02:42.194 CC lib/nvme/nvme_discovery.o 00:02:42.194 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.194 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.194 CC lib/nvme/nvme_tcp.o 00:02:42.451 CC lib/nvme/nvme_opal.o 00:02:42.451 CC lib/nvme/nvme_io_msg.o 00:02:42.451 CC lib/nvme/nvme_poll_group.o 00:02:42.709 CC lib/nvme/nvme_zns.o 00:02:42.709 CC lib/nvme/nvme_cuse.o 00:02:42.709 CC lib/nvme/nvme_vfio_user.o 00:02:42.709 CC lib/nvme/nvme_rdma.o 00:02:42.709 LIB libspdk_thread.a 00:02:42.709 SO libspdk_thread.so.9.0 00:02:42.709 SYMLINK libspdk_thread.so 00:02:42.967 CC lib/accel/accel.o 00:02:42.967 CC lib/accel/accel_rpc.o 00:02:42.967 CC lib/blob/blobstore.o 00:02:43.224 CC lib/init/json_config.o 00:02:43.224 CC lib/accel/accel_sw.o 00:02:43.224 CC lib/blob/request.o 00:02:43.224 CC lib/init/subsystem.o 00:02:43.482 CC lib/virtio/virtio.o 00:02:43.482 CC lib/virtio/virtio_vhost_user.o 00:02:43.482 CC lib/virtio/virtio_vfio_user.o 00:02:43.482 CC lib/virtio/virtio_pci.o 00:02:43.482 CC lib/init/subsystem_rpc.o 00:02:43.739 CC lib/init/rpc.o 00:02:43.739 CC lib/blob/zeroes.o 00:02:43.739 CC lib/blob/blob_bs_dev.o 00:02:43.739 LIB libspdk_init.a 00:02:43.739 LIB libspdk_nvme.a 00:02:43.739 SO libspdk_init.so.4.0 00:02:43.739 LIB libspdk_virtio.a 00:02:43.739 SO libspdk_virtio.so.6.0 00:02:43.739 SYMLINK libspdk_init.so 00:02:43.739 SO libspdk_nvme.so.12.0 00:02:43.997 SYMLINK libspdk_virtio.so 00:02:43.997 LIB libspdk_accel.a 00:02:43.997 CC lib/event/app.o 00:02:43.997 CC lib/event/app_rpc.o 00:02:43.997 CC lib/event/reactor.o 00:02:43.997 CC lib/event/log_rpc.o 00:02:43.997 CC lib/event/scheduler_static.o 00:02:43.997 SO libspdk_accel.so.14.0 00:02:43.997 SYMLINK libspdk_accel.so 00:02:43.997 SYMLINK libspdk_nvme.so 00:02:44.255 CC lib/bdev/part.o 00:02:44.255 CC lib/bdev/scsi_nvme.o 00:02:44.255 CC lib/bdev/bdev.o 00:02:44.255 CC lib/bdev/bdev_rpc.o 00:02:44.255 CC lib/bdev/bdev_zone.o 00:02:44.255 LIB libspdk_event.a 00:02:44.255 SO libspdk_event.so.12.0 00:02:44.513 SYMLINK libspdk_event.so 00:02:45.888 LIB libspdk_blob.a 00:02:45.888 SO libspdk_blob.so.10.1 00:02:45.888 SYMLINK libspdk_blob.so 00:02:46.146 CC lib/lvol/lvol.o 00:02:46.146 CC lib/blobfs/blobfs.o 00:02:46.146 CC lib/blobfs/tree.o 00:02:46.711 LIB libspdk_lvol.a 00:02:46.711 SO libspdk_lvol.so.9.1 00:02:46.711 SYMLINK libspdk_lvol.so 00:02:46.970 LIB libspdk_bdev.a 00:02:46.970 LIB libspdk_blobfs.a 
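By this point the build has produced SPDK's userspace NVMe driver (libspdk_nvme) along with the env_dpdk shim that backs it with the DPDK EAL compiled earlier. As a rough sketch of what links against these libraries, enumerating local PCIe controllers looks approximately like the following; the app name is illustrative, and the one-argument spdk_env_opts_init() form is assumed, as in SPDK releases of this vintage.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("probing %s\n", trid->traddr);
    return true; /* attach to every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);         /* one-arg form, per this era */
    opts.name = "nvme_probe_sketch";   /* illustrative name */
    if (spdk_env_init(&opts) < 0)
        return 1;

    /* A NULL transport ID means: enumerate the local PCIe bus. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
        return 1;
    return 0;
}

The probe/attach callback split mirrors the driver design visible in the object list above: nvme_discovery.c finds controllers, nvme_ctrlr.c brings each one up only after the application opts in.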
00:02:46.970 SO libspdk_blobfs.so.9.0 00:02:46.970 SO libspdk_bdev.so.14.0 00:02:46.970 SYMLINK libspdk_blobfs.so 00:02:46.970 SYMLINK libspdk_bdev.so 00:02:47.228 CC lib/ublk/ublk.o 00:02:47.228 CC lib/ublk/ublk_rpc.o 00:02:47.228 CC lib/ftl/ftl_core.o 00:02:47.228 CC lib/nbd/nbd.o 00:02:47.228 CC lib/ftl/ftl_init.o 00:02:47.228 CC lib/ftl/ftl_layout.o 00:02:47.228 CC lib/ftl/ftl_debug.o 00:02:47.228 CC lib/nbd/nbd_rpc.o 00:02:47.228 CC lib/scsi/dev.o 00:02:47.228 CC lib/nvmf/ctrlr.o 00:02:47.228 CC lib/ftl/ftl_io.o 00:02:47.228 CC lib/ftl/ftl_sb.o 00:02:47.228 CC lib/ftl/ftl_l2p.o 00:02:47.228 CC lib/nvmf/ctrlr_discovery.o 00:02:47.486 CC lib/scsi/lun.o 00:02:47.486 CC lib/ftl/ftl_l2p_flat.o 00:02:47.486 CC lib/ftl/ftl_nv_cache.o 00:02:47.486 CC lib/ftl/ftl_band.o 00:02:47.486 CC lib/ftl/ftl_band_ops.o 00:02:47.486 CC lib/ftl/ftl_writer.o 00:02:47.486 LIB libspdk_nbd.a 00:02:47.486 SO libspdk_nbd.so.6.0 00:02:47.486 CC lib/scsi/port.o 00:02:47.486 CC lib/ftl/ftl_rq.o 00:02:47.486 SYMLINK libspdk_nbd.so 00:02:47.486 CC lib/ftl/ftl_reloc.o 00:02:47.745 CC lib/ftl/ftl_l2p_cache.o 00:02:47.745 CC lib/scsi/scsi.o 00:02:47.745 CC lib/ftl/ftl_p2l.o 00:02:47.745 LIB libspdk_ublk.a 00:02:47.745 CC lib/nvmf/ctrlr_bdev.o 00:02:47.745 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.745 SO libspdk_ublk.so.2.0 00:02:47.746 CC lib/scsi/scsi_bdev.o 00:02:47.746 CC lib/scsi/scsi_pr.o 00:02:48.004 SYMLINK libspdk_ublk.so 00:02:48.004 CC lib/scsi/scsi_rpc.o 00:02:48.004 CC lib/scsi/task.o 00:02:48.004 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:48.004 CC lib/nvmf/subsystem.o 00:02:48.004 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:48.004 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:48.004 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:48.262 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:48.262 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:48.262 CC lib/nvmf/nvmf.o 00:02:48.262 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.262 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:48.263 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.263 LIB libspdk_scsi.a 00:02:48.263 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.263 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.263 SO libspdk_scsi.so.8.0 00:02:48.263 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.522 SYMLINK libspdk_scsi.so 00:02:48.522 CC lib/nvmf/nvmf_rpc.o 00:02:48.522 CC lib/ftl/utils/ftl_conf.o 00:02:48.522 CC lib/ftl/utils/ftl_md.o 00:02:48.522 CC lib/ftl/utils/ftl_mempool.o 00:02:48.522 CC lib/ftl/utils/ftl_bitmap.o 00:02:48.522 CC lib/ftl/utils/ftl_property.o 00:02:48.522 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:48.522 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:48.522 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:48.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:48.780 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:48.780 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:48.780 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:48.780 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:48.780 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:48.780 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:48.780 CC lib/ftl/base/ftl_base_dev.o 00:02:48.780 CC lib/ftl/base/ftl_base_bdev.o 00:02:48.780 CC lib/ftl/ftl_trace.o 00:02:48.780 CC lib/nvmf/transport.o 00:02:49.040 CC lib/nvmf/tcp.o 00:02:49.040 CC lib/nvmf/rdma.o 00:02:49.040 CC lib/iscsi/conn.o 00:02:49.040 CC lib/iscsi/init_grp.o 00:02:49.040 LIB libspdk_ftl.a 00:02:49.040 CC lib/iscsi/iscsi.o 00:02:49.040 CC lib/vhost/vhost.o 00:02:49.040 CC lib/iscsi/md5.o 00:02:49.040 CC lib/iscsi/param.o 00:02:49.298 SO libspdk_ftl.so.8.0 00:02:49.298 CC lib/iscsi/portal_grp.o 00:02:49.298 CC lib/iscsi/tgt_node.o 00:02:49.298 CC 
lib/iscsi/iscsi_subsystem.o 00:02:49.298 SYMLINK libspdk_ftl.so 00:02:49.298 CC lib/iscsi/iscsi_rpc.o 00:02:49.557 CC lib/iscsi/task.o 00:02:49.557 CC lib/vhost/vhost_rpc.o 00:02:49.557 CC lib/vhost/vhost_scsi.o 00:02:49.557 CC lib/vhost/vhost_blk.o 00:02:49.557 CC lib/vhost/rte_vhost_user.o 00:02:50.504 LIB libspdk_iscsi.a 00:02:50.504 LIB libspdk_vhost.a 00:02:50.504 SO libspdk_iscsi.so.7.0 00:02:50.763 SO libspdk_vhost.so.7.1 00:02:50.763 SYMLINK libspdk_iscsi.so 00:02:50.763 SYMLINK libspdk_vhost.so 00:02:51.021 LIB libspdk_nvmf.a 00:02:51.021 SO libspdk_nvmf.so.17.0 00:02:51.279 SYMLINK libspdk_nvmf.so 00:02:51.538 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.538 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.538 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.538 CC module/accel/iaa/accel_iaa.o 00:02:51.538 CC module/sock/posix/posix.o 00:02:51.538 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.538 CC module/accel/ioat/accel_ioat.o 00:02:51.538 CC module/accel/dsa/accel_dsa.o 00:02:51.538 CC module/accel/error/accel_error.o 00:02:51.538 CC module/blob/bdev/blob_bdev.o 00:02:51.538 LIB libspdk_env_dpdk_rpc.a 00:02:51.538 SO libspdk_env_dpdk_rpc.so.5.0 00:02:51.538 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.538 CC module/accel/error/accel_error_rpc.o 00:02:51.538 LIB libspdk_scheduler_gscheduler.a 00:02:51.538 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.538 LIB libspdk_scheduler_dynamic.a 00:02:51.538 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:51.538 SO libspdk_scheduler_dynamic.so.3.0 00:02:51.538 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.538 SO libspdk_scheduler_gscheduler.so.3.0 00:02:51.538 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.796 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.796 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.796 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.796 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.796 LIB libspdk_accel_error.a 00:02:51.796 LIB libspdk_blob_bdev.a 00:02:51.796 SO libspdk_blob_bdev.so.10.1 00:02:51.796 SO libspdk_accel_error.so.1.0 00:02:51.796 LIB libspdk_accel_ioat.a 00:02:51.796 LIB libspdk_accel_iaa.a 00:02:51.796 SO libspdk_accel_ioat.so.5.0 00:02:51.796 SO libspdk_accel_iaa.so.2.0 00:02:51.796 SYMLINK libspdk_blob_bdev.so 00:02:51.796 SYMLINK libspdk_accel_error.so 00:02:51.796 LIB libspdk_accel_dsa.a 00:02:51.796 SYMLINK libspdk_accel_ioat.so 00:02:51.796 SYMLINK libspdk_accel_iaa.so 00:02:51.796 SO libspdk_accel_dsa.so.4.0 00:02:51.796 SYMLINK libspdk_accel_dsa.so 00:02:51.796 CC module/bdev/gpt/gpt.o 00:02:51.796 CC module/bdev/delay/vbdev_delay.o 00:02:51.796 CC module/bdev/nvme/bdev_nvme.o 00:02:51.796 CC module/bdev/null/bdev_null.o 00:02:51.796 CC module/bdev/malloc/bdev_malloc.o 00:02:51.796 CC module/bdev/error/vbdev_error.o 00:02:51.796 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.796 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.054 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.054 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.054 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.054 LIB libspdk_sock_posix.a 00:02:52.054 SO libspdk_sock_posix.so.5.0 00:02:52.054 CC module/bdev/null/bdev_null_rpc.o 00:02:52.054 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.054 SYMLINK libspdk_sock_posix.so 00:02:52.054 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.312 LIB libspdk_blobfs_bdev.a 00:02:52.312 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.312 SO libspdk_blobfs_bdev.so.5.0 00:02:52.312 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.312 CC module/bdev/delay/vbdev_delay_rpc.o 
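The vbdev modules compiling above (gpt, delay, null, malloc, error, passthru, and their _rpc companions) all register virtual block devices behind the common bdev API built earlier as libspdk_bdev. A hedged sketch of opening one such device by name follows; it assumes a release that provides spdk_bdev_open_ext() (older trees used spdk_bdev_open), it must run on an SPDK thread, and "Malloc0" is a hypothetical device name, not one created by this build.

#include "spdk/stdinc.h"
#include "spdk/bdev.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
              void *event_ctx)
{
    /* Resize/remove notifications would arrive here; empty in this sketch. */
}

/* Call from an SPDK thread, e.g. the start routine of spdk_app_start(). */
static void
open_bdev_sketch(void)
{
    struct spdk_bdev_desc *desc = NULL;

    /* "Malloc0" is an illustrative bdev name. */
    if (spdk_bdev_open_ext("Malloc0", false, bdev_event_cb, NULL, &desc) != 0)
        return;

    struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
    printf("block size %u, %" PRIu64 " blocks\n",
           spdk_bdev_get_block_size(bdev), spdk_bdev_get_num_blocks(bdev));
    spdk_bdev_close(desc);
}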
00:02:52.312 SYMLINK libspdk_blobfs_bdev.so 00:02:52.312 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.312 CC module/bdev/nvme/nvme_rpc.o 00:02:52.312 LIB libspdk_bdev_error.a 00:02:52.312 LIB libspdk_bdev_null.a 00:02:52.312 LIB libspdk_bdev_gpt.a 00:02:52.312 SO libspdk_bdev_error.so.5.0 00:02:52.312 SO libspdk_bdev_null.so.5.0 00:02:52.312 SO libspdk_bdev_gpt.so.5.0 00:02:52.312 LIB libspdk_bdev_malloc.a 00:02:52.312 LIB libspdk_bdev_passthru.a 00:02:52.312 SO libspdk_bdev_malloc.so.5.0 00:02:52.312 SYMLINK libspdk_bdev_gpt.so 00:02:52.312 SO libspdk_bdev_passthru.so.5.0 00:02:52.312 SYMLINK libspdk_bdev_null.so 00:02:52.312 SYMLINK libspdk_bdev_error.so 00:02:52.312 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.312 CC module/bdev/nvme/vbdev_opal.o 00:02:52.312 LIB libspdk_bdev_delay.a 00:02:52.312 LIB libspdk_bdev_lvol.a 00:02:52.312 SO libspdk_bdev_delay.so.5.0 00:02:52.312 SYMLINK libspdk_bdev_passthru.so 00:02:52.312 SYMLINK libspdk_bdev_malloc.so 00:02:52.312 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.312 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:52.312 SO libspdk_bdev_lvol.so.5.0 00:02:52.571 SYMLINK libspdk_bdev_delay.so 00:02:52.571 CC module/bdev/raid/bdev_raid.o 00:02:52.571 SYMLINK libspdk_bdev_lvol.so 00:02:52.571 CC module/bdev/split/vbdev_split.o 00:02:52.571 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.571 CC module/bdev/xnvme/bdev_xnvme.o 00:02:52.571 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.571 CC module/bdev/aio/bdev_aio.o 00:02:52.571 CC module/bdev/ftl/bdev_ftl.o 00:02:52.571 CC module/bdev/iscsi/bdev_iscsi.o 00:02:52.828 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:52.828 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.828 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:52.828 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:52.828 LIB libspdk_bdev_zone_block.a 00:02:52.828 SO libspdk_bdev_zone_block.so.5.0 00:02:52.828 CC module/bdev/aio/bdev_aio_rpc.o 00:02:52.828 LIB libspdk_bdev_split.a 00:02:52.828 SYMLINK libspdk_bdev_zone_block.so 00:02:52.828 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.828 LIB libspdk_bdev_iscsi.a 00:02:52.828 SO libspdk_bdev_split.so.5.0 00:02:52.828 SO libspdk_bdev_iscsi.so.5.0 00:02:52.828 LIB libspdk_bdev_xnvme.a 00:02:53.087 SYMLINK libspdk_bdev_iscsi.so 00:02:53.087 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.087 SYMLINK libspdk_bdev_split.so 00:02:53.087 CC module/bdev/raid/raid0.o 00:02:53.087 LIB libspdk_bdev_ftl.a 00:02:53.087 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.087 CC module/bdev/raid/raid1.o 00:02:53.087 SO libspdk_bdev_xnvme.so.2.0 00:02:53.087 SO libspdk_bdev_ftl.so.5.0 00:02:53.087 LIB libspdk_bdev_aio.a 00:02:53.087 SYMLINK libspdk_bdev_xnvme.so 00:02:53.087 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.087 SO libspdk_bdev_aio.so.5.0 00:02:53.087 SYMLINK libspdk_bdev_ftl.so 00:02:53.087 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.087 CC module/bdev/raid/concat.o 00:02:53.087 SYMLINK libspdk_bdev_aio.so 00:02:53.346 LIB libspdk_bdev_raid.a 00:02:53.346 SO libspdk_bdev_raid.so.5.0 00:02:53.346 LIB libspdk_bdev_virtio.a 00:02:53.346 SYMLINK libspdk_bdev_raid.so 00:02:53.346 SO libspdk_bdev_virtio.so.5.0 00:02:53.606 SYMLINK libspdk_bdev_virtio.so 00:02:53.606 LIB libspdk_bdev_nvme.a 00:02:53.606 SO libspdk_bdev_nvme.so.6.0 00:02:53.864 SYMLINK libspdk_bdev_nvme.so 00:02:54.124 CC module/event/subsystems/iobuf/iobuf.o 00:02:54.124 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:54.124 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:54.124 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:54.124 CC module/event/subsystems/vmd/vmd.o 00:02:54.124 CC module/event/subsystems/sock/sock.o 00:02:54.124 CC module/event/subsystems/scheduler/scheduler.o 00:02:54.124 LIB libspdk_event_vhost_blk.a 00:02:54.124 LIB libspdk_event_sock.a 00:02:54.124 LIB libspdk_event_vmd.a 00:02:54.124 SO libspdk_event_vhost_blk.so.2.0 00:02:54.124 SO libspdk_event_sock.so.4.0 00:02:54.124 LIB libspdk_event_iobuf.a 00:02:54.124 LIB libspdk_event_scheduler.a 00:02:54.124 SO libspdk_event_vmd.so.5.0 00:02:54.124 SO libspdk_event_scheduler.so.3.0 00:02:54.124 SO libspdk_event_iobuf.so.2.0 00:02:54.124 SYMLINK libspdk_event_vhost_blk.so 00:02:54.124 SYMLINK libspdk_event_sock.so 00:02:54.124 SYMLINK libspdk_event_vmd.so 00:02:54.124 SYMLINK libspdk_event_scheduler.so 00:02:54.124 SYMLINK libspdk_event_iobuf.so 00:02:54.385 CC module/event/subsystems/accel/accel.o 00:02:54.385 LIB libspdk_event_accel.a 00:02:54.644 SO libspdk_event_accel.so.5.0 00:02:54.644 SYMLINK libspdk_event_accel.so 00:02:54.644 CC module/event/subsystems/bdev/bdev.o 00:02:54.902 LIB libspdk_event_bdev.a 00:02:54.902 SO libspdk_event_bdev.so.5.0 00:02:54.902 SYMLINK libspdk_event_bdev.so 00:02:55.161 CC module/event/subsystems/scsi/scsi.o 00:02:55.161 CC module/event/subsystems/ublk/ublk.o 00:02:55.161 CC module/event/subsystems/nbd/nbd.o 00:02:55.161 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:55.161 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:55.161 LIB libspdk_event_ublk.a 00:02:55.161 LIB libspdk_event_nbd.a 00:02:55.161 LIB libspdk_event_scsi.a 00:02:55.161 SO libspdk_event_ublk.so.2.0 00:02:55.161 SO libspdk_event_nbd.so.5.0 00:02:55.161 SO libspdk_event_scsi.so.5.0 00:02:55.161 LIB libspdk_event_nvmf.a 00:02:55.161 SYMLINK libspdk_event_ublk.so 00:02:55.161 SYMLINK libspdk_event_nbd.so 00:02:55.161 SYMLINK libspdk_event_scsi.so 00:02:55.161 SO libspdk_event_nvmf.so.5.0 00:02:55.161 SYMLINK libspdk_event_nvmf.so 00:02:55.420 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:55.420 CC module/event/subsystems/iscsi/iscsi.o 00:02:55.420 LIB libspdk_event_vhost_scsi.a 00:02:55.420 SO libspdk_event_vhost_scsi.so.2.0 00:02:55.420 LIB libspdk_event_iscsi.a 00:02:55.420 SO libspdk_event_iscsi.so.5.0 00:02:55.420 SYMLINK libspdk_event_vhost_scsi.so 00:02:55.678 SYMLINK libspdk_event_iscsi.so 00:02:55.678 SO libspdk.so.5.0 00:02:55.678 SYMLINK libspdk.so 00:02:55.678 CXX app/trace/trace.o 00:02:55.936 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.936 CC examples/ioat/perf/perf.o 00:02:55.936 CC examples/sock/hello_world/hello_sock.o 00:02:55.936 CC examples/nvme/hello_world/hello_world.o 00:02:55.936 CC examples/accel/perf/accel_perf.o 00:02:55.936 CC examples/blob/hello_world/hello_blob.o 00:02:55.936 CC examples/nvmf/nvmf/nvmf.o 00:02:55.936 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.936 CC test/accel/dif/dif.o 00:02:55.936 LINK lsvmd 00:02:55.936 LINK hello_sock 00:02:55.936 LINK ioat_perf 00:02:55.937 LINK hello_world 00:02:56.195 LINK hello_blob 00:02:56.195 LINK hello_bdev 00:02:56.195 LINK spdk_trace 00:02:56.195 CC examples/vmd/led/led.o 00:02:56.195 LINK nvmf 00:02:56.195 CC examples/ioat/verify/verify.o 00:02:56.195 CC examples/nvme/reconnect/reconnect.o 00:02:56.195 LINK dif 00:02:56.195 CC examples/util/zipf/zipf.o 00:02:56.195 LINK led 00:02:56.195 LINK accel_perf 00:02:56.195 CC examples/blob/cli/blobcli.o 00:02:56.195 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.195 CC app/trace_record/trace_record.o 00:02:56.454 LINK verify 00:02:56.454 CC 
examples/thread/thread/thread_ex.o 00:02:56.454 LINK zipf 00:02:56.454 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:56.454 CC examples/idxd/perf/perf.o 00:02:56.454 LINK reconnect 00:02:56.454 LINK spdk_trace_record 00:02:56.454 CC test/app/bdev_svc/bdev_svc.o 00:02:56.454 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.713 LINK interrupt_tgt 00:02:56.713 CC app/nvmf_tgt/nvmf_main.o 00:02:56.713 LINK thread 00:02:56.713 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.713 LINK bdev_svc 00:02:56.713 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.713 LINK idxd_perf 00:02:56.713 LINK blobcli 00:02:56.713 LINK nvmf_tgt 00:02:56.713 CC test/app/histogram_perf/histogram_perf.o 00:02:56.971 LINK iscsi_tgt 00:02:56.971 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.971 CC app/spdk_tgt/spdk_tgt.o 00:02:56.971 LINK histogram_perf 00:02:56.971 CC test/bdev/bdevio/bdevio.o 00:02:56.971 LINK nvme_fuzz 00:02:56.971 CC test/blobfs/mkfs/mkfs.o 00:02:56.971 CC app/spdk_lspci/spdk_lspci.o 00:02:56.971 CC examples/nvme/arbitration/arbitration.o 00:02:56.971 LINK nvme_manage 00:02:56.971 LINK spdk_tgt 00:02:56.971 CC examples/nvme/hotplug/hotplug.o 00:02:56.971 LINK bdevperf 00:02:57.229 LINK spdk_lspci 00:02:57.229 LINK mkfs 00:02:57.229 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.229 CC examples/nvme/abort/abort.o 00:02:57.229 TEST_HEADER include/spdk/accel.h 00:02:57.229 TEST_HEADER include/spdk/accel_module.h 00:02:57.229 TEST_HEADER include/spdk/assert.h 00:02:57.229 TEST_HEADER include/spdk/barrier.h 00:02:57.229 TEST_HEADER include/spdk/base64.h 00:02:57.229 TEST_HEADER include/spdk/bdev.h 00:02:57.229 TEST_HEADER include/spdk/bdev_module.h 00:02:57.229 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.229 TEST_HEADER include/spdk/bit_array.h 00:02:57.229 TEST_HEADER include/spdk/bit_pool.h 00:02:57.229 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.229 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.229 TEST_HEADER include/spdk/blobfs.h 00:02:57.229 TEST_HEADER include/spdk/blob.h 00:02:57.229 TEST_HEADER include/spdk/conf.h 00:02:57.229 TEST_HEADER include/spdk/config.h 00:02:57.229 TEST_HEADER include/spdk/cpuset.h 00:02:57.229 LINK bdevio 00:02:57.229 TEST_HEADER include/spdk/crc16.h 00:02:57.229 TEST_HEADER include/spdk/crc32.h 00:02:57.229 TEST_HEADER include/spdk/crc64.h 00:02:57.229 CC app/spdk_nvme_perf/perf.o 00:02:57.229 TEST_HEADER include/spdk/dif.h 00:02:57.229 TEST_HEADER include/spdk/dma.h 00:02:57.229 TEST_HEADER include/spdk/endian.h 00:02:57.229 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.229 TEST_HEADER include/spdk/env.h 00:02:57.229 TEST_HEADER include/spdk/event.h 00:02:57.229 CC test/app/jsoncat/jsoncat.o 00:02:57.229 TEST_HEADER include/spdk/fd_group.h 00:02:57.229 TEST_HEADER include/spdk/fd.h 00:02:57.229 TEST_HEADER include/spdk/file.h 00:02:57.229 CC test/app/stub/stub.o 00:02:57.229 TEST_HEADER include/spdk/ftl.h 00:02:57.229 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.229 TEST_HEADER include/spdk/hexlify.h 00:02:57.229 LINK hotplug 00:02:57.229 TEST_HEADER include/spdk/histogram_data.h 00:02:57.229 TEST_HEADER include/spdk/idxd.h 00:02:57.229 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.229 TEST_HEADER include/spdk/init.h 00:02:57.229 TEST_HEADER include/spdk/ioat.h 00:02:57.229 LINK cmb_copy 00:02:57.229 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.229 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.229 TEST_HEADER include/spdk/json.h 00:02:57.229 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.229 TEST_HEADER include/spdk/likely.h 00:02:57.229 TEST_HEADER include/spdk/log.h 
00:02:57.229 TEST_HEADER include/spdk/lvol.h 00:02:57.229 TEST_HEADER include/spdk/memory.h 00:02:57.229 LINK arbitration 00:02:57.229 TEST_HEADER include/spdk/mmio.h 00:02:57.229 TEST_HEADER include/spdk/nbd.h 00:02:57.229 TEST_HEADER include/spdk/notify.h 00:02:57.229 TEST_HEADER include/spdk/nvme.h 00:02:57.229 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.229 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.229 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.229 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.229 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.229 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.229 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.229 TEST_HEADER include/spdk/nvmf.h 00:02:57.229 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.229 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.229 TEST_HEADER include/spdk/opal.h 00:02:57.229 TEST_HEADER include/spdk/opal_spec.h 00:02:57.229 TEST_HEADER include/spdk/pci_ids.h 00:02:57.229 TEST_HEADER include/spdk/pipe.h 00:02:57.229 TEST_HEADER include/spdk/queue.h 00:02:57.229 TEST_HEADER include/spdk/reduce.h 00:02:57.229 TEST_HEADER include/spdk/rpc.h 00:02:57.229 TEST_HEADER include/spdk/scheduler.h 00:02:57.229 TEST_HEADER include/spdk/scsi.h 00:02:57.229 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.229 TEST_HEADER include/spdk/sock.h 00:02:57.229 TEST_HEADER include/spdk/stdinc.h 00:02:57.229 TEST_HEADER include/spdk/string.h 00:02:57.229 TEST_HEADER include/spdk/thread.h 00:02:57.229 TEST_HEADER include/spdk/trace.h 00:02:57.229 TEST_HEADER include/spdk/trace_parser.h 00:02:57.229 TEST_HEADER include/spdk/tree.h 00:02:57.229 TEST_HEADER include/spdk/ublk.h 00:02:57.229 TEST_HEADER include/spdk/util.h 00:02:57.229 TEST_HEADER include/spdk/uuid.h 00:02:57.229 TEST_HEADER include/spdk/version.h 00:02:57.229 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.229 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.488 TEST_HEADER include/spdk/vhost.h 00:02:57.488 TEST_HEADER include/spdk/vmd.h 00:02:57.488 TEST_HEADER include/spdk/xor.h 00:02:57.488 TEST_HEADER include/spdk/zipf.h 00:02:57.488 CXX test/cpp_headers/accel.o 00:02:57.488 LINK jsoncat 00:02:57.488 CXX test/cpp_headers/accel_module.o 00:02:57.488 LINK stub 00:02:57.488 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.488 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.488 LINK abort 00:02:57.488 CC app/spdk_nvme_identify/identify.o 00:02:57.488 CXX test/cpp_headers/assert.o 00:02:57.488 CC test/dma/test_dma/test_dma.o 00:02:57.488 CXX test/cpp_headers/barrier.o 00:02:57.488 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.488 CXX test/cpp_headers/base64.o 00:02:57.488 CXX test/cpp_headers/bdev.o 00:02:57.747 LINK pmr_persistence 00:02:57.747 CXX test/cpp_headers/bdev_module.o 00:02:57.747 CXX test/cpp_headers/bdev_zone.o 00:02:57.747 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.747 CXX test/cpp_headers/bit_array.o 00:02:57.747 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.747 LINK test_dma 00:02:57.747 CXX test/cpp_headers/bit_pool.o 00:02:58.031 CC app/spdk_top/spdk_top.o 00:02:58.031 LINK spdk_nvme_perf 00:02:58.031 LINK vhost_fuzz 00:02:58.031 LINK spdk_nvme_discover 00:02:58.031 CC app/vhost/vhost.o 00:02:58.031 CXX test/cpp_headers/blob_bdev.o 00:02:58.031 CC test/event/event_perf/event_perf.o 00:02:58.031 CC test/event/reactor/reactor.o 00:02:58.031 CC test/event/reactor_perf/reactor_perf.o 00:02:58.031 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.031 CC test/env/vtophys/vtophys.o 00:02:58.031 LINK vhost 00:02:58.031 LINK 
reactor 00:02:58.031 LINK reactor_perf 00:02:58.288 LINK event_perf 00:02:58.288 LINK spdk_nvme_identify 00:02:58.288 LINK vtophys 00:02:58.288 CXX test/cpp_headers/blobfs.o 00:02:58.288 CXX test/cpp_headers/blob.o 00:02:58.288 CXX test/cpp_headers/conf.o 00:02:58.288 CC test/event/app_repeat/app_repeat.o 00:02:58.288 LINK mem_callbacks 00:02:58.288 CC app/spdk_dd/spdk_dd.o 00:02:58.288 CC app/fio/nvme/fio_plugin.o 00:02:58.288 CC app/fio/bdev/fio_plugin.o 00:02:58.288 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.288 CC test/env/memory/memory_ut.o 00:02:58.546 CXX test/cpp_headers/config.o 00:02:58.546 CC test/env/pci/pci_ut.o 00:02:58.546 LINK app_repeat 00:02:58.546 CXX test/cpp_headers/cpuset.o 00:02:58.546 LINK env_dpdk_post_init 00:02:58.546 LINK spdk_dd 00:02:58.546 CXX test/cpp_headers/crc16.o 00:02:58.546 LINK iscsi_fuzz 00:02:58.546 CXX test/cpp_headers/crc32.o 00:02:58.546 CC test/event/scheduler/scheduler.o 00:02:58.804 CXX test/cpp_headers/crc64.o 00:02:58.804 CXX test/cpp_headers/dif.o 00:02:58.804 CXX test/cpp_headers/dma.o 00:02:58.804 LINK spdk_bdev 00:02:58.804 LINK spdk_top 00:02:58.804 CXX test/cpp_headers/endian.o 00:02:58.804 LINK pci_ut 00:02:58.804 LINK spdk_nvme 00:02:58.804 CXX test/cpp_headers/env_dpdk.o 00:02:58.804 CXX test/cpp_headers/env.o 00:02:58.804 LINK scheduler 00:02:58.804 CXX test/cpp_headers/event.o 00:02:58.804 CXX test/cpp_headers/fd_group.o 00:02:58.804 CC test/lvol/esnap/esnap.o 00:02:58.804 CXX test/cpp_headers/fd.o 00:02:58.804 CXX test/cpp_headers/file.o 00:02:59.061 CXX test/cpp_headers/ftl.o 00:02:59.061 CXX test/cpp_headers/gpt_spec.o 00:02:59.061 CXX test/cpp_headers/hexlify.o 00:02:59.061 CXX test/cpp_headers/histogram_data.o 00:02:59.061 CXX test/cpp_headers/idxd.o 00:02:59.061 CXX test/cpp_headers/idxd_spec.o 00:02:59.061 CXX test/cpp_headers/init.o 00:02:59.061 CXX test/cpp_headers/ioat.o 00:02:59.061 CXX test/cpp_headers/ioat_spec.o 00:02:59.061 CXX test/cpp_headers/iscsi_spec.o 00:02:59.061 CXX test/cpp_headers/json.o 00:02:59.061 CXX test/cpp_headers/jsonrpc.o 00:02:59.061 CXX test/cpp_headers/likely.o 00:02:59.061 CXX test/cpp_headers/log.o 00:02:59.061 CXX test/cpp_headers/lvol.o 00:02:59.061 CXX test/cpp_headers/memory.o 00:02:59.061 CXX test/cpp_headers/mmio.o 00:02:59.319 LINK memory_ut 00:02:59.319 CXX test/cpp_headers/nbd.o 00:02:59.319 CXX test/cpp_headers/notify.o 00:02:59.319 CC test/nvme/reset/reset.o 00:02:59.319 CXX test/cpp_headers/nvme.o 00:02:59.319 CC test/nvme/aer/aer.o 00:02:59.319 CXX test/cpp_headers/nvme_intel.o 00:02:59.319 CC test/nvme/sgl/sgl.o 00:02:59.319 CXX test/cpp_headers/nvme_ocssd.o 00:02:59.319 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:59.319 CC test/rpc_client/rpc_client_test.o 00:02:59.319 CXX test/cpp_headers/nvme_spec.o 00:02:59.319 CXX test/cpp_headers/nvme_zns.o 00:02:59.319 CXX test/cpp_headers/nvmf_cmd.o 00:02:59.576 LINK rpc_client_test 00:02:59.576 CC test/nvme/e2edp/nvme_dp.o 00:02:59.576 LINK aer 00:02:59.576 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:59.576 CC test/thread/poller_perf/poller_perf.o 00:02:59.576 LINK reset 00:02:59.576 CXX test/cpp_headers/nvmf.o 00:02:59.576 LINK sgl 00:02:59.576 CXX test/cpp_headers/nvmf_spec.o 00:02:59.576 CXX test/cpp_headers/nvmf_transport.o 00:02:59.576 CXX test/cpp_headers/opal.o 00:02:59.576 LINK poller_perf 00:02:59.576 CC test/nvme/overhead/overhead.o 00:02:59.834 CXX test/cpp_headers/opal_spec.o 00:02:59.834 CXX test/cpp_headers/pci_ids.o 00:02:59.834 CC test/nvme/err_injection/err_injection.o 00:02:59.834 LINK nvme_dp 
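The TEST_HEADER / CXX pairs above are SPDK's header self-containment gate: each public header registered with TEST_HEADER is compiled as its own translation unit under test/cpp_headers, so a header that silently depends on another include fails right here. A minimal sketch of the pattern (the loop is illustrative, not the actual Makefile rule, and assumes it runs from the spdk repo root):

#!/usr/bin/env bash
# Illustrative only: compile every public SPDK header standalone,
# mirroring the "CXX test/cpp_headers/<name>.o" lines in this log.
set -e
for hdr in include/spdk/*.h; do
  name=$(basename "$hdr" .h)
  echo "#include <spdk/${name}.h>" > "test/cpp_headers/${name}.cpp"
  g++ -I include -c "test/cpp_headers/${name}.cpp" -o "test/cpp_headers/${name}.o"
done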
00:02:59.834 CC test/nvme/startup/startup.o 00:02:59.834 CXX test/cpp_headers/pipe.o 00:02:59.834 CXX test/cpp_headers/queue.o 00:02:59.834 CXX test/cpp_headers/reduce.o 00:02:59.834 CC test/nvme/reserve/reserve.o 00:02:59.834 CXX test/cpp_headers/rpc.o 00:02:59.834 CXX test/cpp_headers/scheduler.o 00:02:59.834 CC test/nvme/simple_copy/simple_copy.o 00:02:59.834 LINK startup 00:02:59.834 LINK err_injection 00:02:59.834 CXX test/cpp_headers/scsi.o 00:02:59.834 CXX test/cpp_headers/scsi_spec.o 00:02:59.834 CXX test/cpp_headers/sock.o 00:03:00.091 LINK overhead 00:03:00.091 CXX test/cpp_headers/stdinc.o 00:03:00.091 LINK reserve 00:03:00.091 CXX test/cpp_headers/string.o 00:03:00.091 CXX test/cpp_headers/thread.o 00:03:00.091 CC test/nvme/connect_stress/connect_stress.o 00:03:00.091 CC test/nvme/boot_partition/boot_partition.o 00:03:00.091 LINK simple_copy 00:03:00.091 CXX test/cpp_headers/trace.o 00:03:00.091 CC test/nvme/compliance/nvme_compliance.o 00:03:00.091 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.091 CXX test/cpp_headers/trace_parser.o 00:03:00.091 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.091 CXX test/cpp_headers/tree.o 00:03:00.091 LINK connect_stress 00:03:00.091 LINK boot_partition 00:03:00.348 CXX test/cpp_headers/ublk.o 00:03:00.348 CC test/nvme/fdp/fdp.o 00:03:00.348 CC test/nvme/cuse/cuse.o 00:03:00.348 LINK fused_ordering 00:03:00.348 CXX test/cpp_headers/util.o 00:03:00.348 LINK doorbell_aers 00:03:00.348 CXX test/cpp_headers/uuid.o 00:03:00.348 CXX test/cpp_headers/version.o 00:03:00.348 CXX test/cpp_headers/vfio_user_pci.o 00:03:00.348 CXX test/cpp_headers/vfio_user_spec.o 00:03:00.348 CXX test/cpp_headers/vhost.o 00:03:00.348 LINK nvme_compliance 00:03:00.348 CXX test/cpp_headers/vmd.o 00:03:00.348 CXX test/cpp_headers/xor.o 00:03:00.348 CXX test/cpp_headers/zipf.o 00:03:00.606 LINK fdp 00:03:01.171 LINK cuse 00:03:03.070 LINK esnap 00:03:03.329 00:03:03.329 real 0m48.984s 00:03:03.329 user 4m44.899s 00:03:03.329 sys 1m0.089s 00:03:03.329 ************************************ 00:03:03.329 END TEST make 00:03:03.329 ************************************ 00:03:03.329 09:40:52 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:03.329 09:40:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.329 09:40:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:03.329 09:40:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:03.329 09:40:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:03.329 09:40:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:03.329 09:40:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:03.329 09:40:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:03.329 09:40:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:03.329 09:40:52 -- scripts/common.sh@335 -- # IFS=.-: 00:03:03.329 09:40:52 -- scripts/common.sh@335 -- # read -ra ver1 00:03:03.329 09:40:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:03.329 09:40:52 -- scripts/common.sh@336 -- # read -ra ver2 00:03:03.329 09:40:52 -- scripts/common.sh@337 -- # local 'op=<' 00:03:03.329 09:40:52 -- scripts/common.sh@339 -- # ver1_l=2 00:03:03.329 09:40:52 -- scripts/common.sh@340 -- # ver2_l=1 00:03:03.329 09:40:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:03.329 09:40:52 -- scripts/common.sh@343 -- # case "$op" in 00:03:03.329 09:40:52 -- scripts/common.sh@344 -- # : 1 00:03:03.329 09:40:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:03.329 09:40:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:03:03.329 09:40:52 -- scripts/common.sh@364 -- # decimal 1 00:03:03.329 09:40:52 -- scripts/common.sh@352 -- # local d=1 00:03:03.329 09:40:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:03.329 09:40:52 -- scripts/common.sh@354 -- # echo 1 00:03:03.329 09:40:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:03.329 09:40:52 -- scripts/common.sh@365 -- # decimal 2 00:03:03.329 09:40:52 -- scripts/common.sh@352 -- # local d=2 00:03:03.329 09:40:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:03.329 09:40:52 -- scripts/common.sh@354 -- # echo 2 00:03:03.329 09:40:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:03.329 09:40:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:03.329 09:40:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:03.329 09:40:52 -- scripts/common.sh@367 -- # return 0 00:03:03.329 09:40:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:03.329 09:40:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:03.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.329 --rc genhtml_branch_coverage=1 00:03:03.329 --rc genhtml_function_coverage=1 00:03:03.329 --rc genhtml_legend=1 00:03:03.329 --rc geninfo_all_blocks=1 00:03:03.329 --rc geninfo_unexecuted_blocks=1 00:03:03.329 00:03:03.329 ' 00:03:03.329 09:40:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:03.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.329 --rc genhtml_branch_coverage=1 00:03:03.329 --rc genhtml_function_coverage=1 00:03:03.329 --rc genhtml_legend=1 00:03:03.329 --rc geninfo_all_blocks=1 00:03:03.329 --rc geninfo_unexecuted_blocks=1 00:03:03.329 00:03:03.329 ' 00:03:03.329 09:40:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:03.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.329 --rc genhtml_branch_coverage=1 00:03:03.329 --rc genhtml_function_coverage=1 00:03:03.329 --rc genhtml_legend=1 00:03:03.329 --rc geninfo_all_blocks=1 00:03:03.329 --rc geninfo_unexecuted_blocks=1 00:03:03.329 00:03:03.329 ' 00:03:03.329 09:40:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:03.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.329 --rc genhtml_branch_coverage=1 00:03:03.329 --rc genhtml_function_coverage=1 00:03:03.329 --rc genhtml_legend=1 00:03:03.329 --rc geninfo_all_blocks=1 00:03:03.329 --rc geninfo_unexecuted_blocks=1 00:03:03.329 00:03:03.329 ' 00:03:03.329 09:40:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:03.329 09:40:52 -- nvmf/common.sh@7 -- # uname -s 00:03:03.329 09:40:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:03.329 09:40:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:03.329 09:40:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:03.329 09:40:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:03.329 09:40:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:03.329 09:40:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:03.329 09:40:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:03.329 09:40:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:03.329 09:40:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:03.329 09:40:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:03.587 09:40:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d99513a-c383-4fd7-ab90-5cd725b0d4d6 
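The scripts/common.sh xtrace above is the coverage gate deciding which lcov flags to use: lt 1.15 2 splits both version strings on ".", "-" and ":", normalizes each component with decimal, and compares component-wise. A reconstruction inferred from the trace alone (the real helper may handle more edge cases):

# Sketch of the traced cmp_versions/lt helpers; reconstructed from the
# xtrace output, so treat the details as approximations.
decimal() {
  local d=$1
  [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # non-numeric parts count as 0
}
cmp_versions() {
  local ver1 ver2 ver1_l ver2_l op=$2 lt=0 gt=0 eq=0 v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
  case "$op" in
    '<') lt=1 ;; '>') gt=1 ;;
    '<=') lt=1 eq=1 ;; '>=') gt=1 eq=1 ;; '==') eq=1 ;;
  esac
  for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
    ver1[v]=$(decimal "${ver1[v]:-0}")
    ver2[v]=$(decimal "${ver2[v]:-0}")
    ((ver1[v] > ver2[v])) && return $((gt ^ 1))   # first greater component decides
    ((ver1[v] < ver2[v])) && return $((lt ^ 1))   # first smaller component decides
  done
  ((eq == 1))   # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }

Here lt 1.15 2 succeeds (lcov 1.15 is older than 2.x), which is why the lcov 1.x-style "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options are exported into LCOV_OPTS and LCOV.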
00:03:03.587 09:40:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d99513a-c383-4fd7-ab90-5cd725b0d4d6 00:03:03.587 09:40:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:03.587 09:40:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:03.587 09:40:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:03.587 09:40:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:03.587 09:40:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:03.587 09:40:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.587 09:40:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.587 09:40:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.587 09:40:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.587 09:40:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.587 09:40:52 -- paths/export.sh@5 -- # export PATH 00:03:03.587 09:40:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.587 09:40:52 -- nvmf/common.sh@46 -- # : 0 00:03:03.587 09:40:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:03.587 09:40:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:03.587 09:40:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:03.587 09:40:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:03.587 09:40:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:03.587 09:40:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:03.587 09:40:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:03.587 09:40:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:03.587 09:40:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:03.587 09:40:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:03.587 09:40:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:03.587 09:40:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:03.587 09:40:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:03.587 09:40:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:03.587 09:40:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:03.587 09:40:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:03.587 09:40:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:03.587 09:40:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:03.587 09:40:52 -- 
spdk/autotest.sh@48 -- # udevadm_pid=48141 00:03:03.587 09:40:52 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:03.587 09:40:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:03.587 09:40:52 -- spdk/autotest.sh@54 -- # echo 48164 00:03:03.587 09:40:52 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:03.587 09:40:52 -- spdk/autotest.sh@56 -- # echo 48170 00:03:03.587 09:40:52 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:03.587 09:40:52 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:03.587 09:40:52 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.587 09:40:52 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:03.587 09:40:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:03.587 09:40:52 -- common/autotest_common.sh@10 -- # set +x 00:03:03.587 09:40:52 -- spdk/autotest.sh@70 -- # create_test_list 00:03:03.587 09:40:52 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:03.587 09:40:52 -- common/autotest_common.sh@10 -- # set +x 00:03:03.587 09:40:52 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:03.587 09:40:52 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:03.587 09:40:52 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:03.587 09:40:52 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:03.587 09:40:52 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:03.587 09:40:52 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:03.587 09:40:52 -- common/autotest_common.sh@1450 -- # uname 00:03:03.587 09:40:52 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:03.587 09:40:52 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:03.587 09:40:52 -- common/autotest_common.sh@1470 -- # uname 00:03:03.587 09:40:52 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:03.587 09:40:52 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:03.588 09:40:52 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:03.588 lcov: LCOV version 1.15 00:03:03.588 09:40:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:11.706 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:11.706 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:11.706 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:11.706 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:11.706 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:11.706 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:33.656 09:41:19 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:33.656 09:41:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:33.656 09:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:33.656 09:41:19 -- spdk/autotest.sh@89 -- # rm -f 00:03:33.656 09:41:19 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.656 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:33.656 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:33.656 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:33.656 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:33.656 09:41:20 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:33.656 09:41:20 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:33.656 09:41:20 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:33.656 09:41:20 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:33.656 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.656 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:33.656 09:41:20 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:33.656 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.656 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:33.656 09:41:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:33.656 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.656 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:33.656 09:41:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:33.656 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.656 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:33.656 09:41:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:33.656 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.656 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:03:33.656 09:41:20 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:03:33.656 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.656 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.657 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:33.657 09:41:20 -- 
common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:33.657 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:33.657 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.657 09:41:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:33.657 09:41:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:33.657 09:41:20 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:33.657 09:41:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:33.657 09:41:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 /dev/nvme2n1 /dev/nvme3n1 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # grep -v p 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.657 09:41:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:33.657 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:33.657 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:33.657 No valid GPT data, bailing 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:33.657 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:33.657 09:41:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:33.657 1+0 records in 00:03:33.657 1+0 records out 00:03:33.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294781 s, 35.6 MB/s 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.657 09:41:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:33.657 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:33.657 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:33.657 No valid GPT data, bailing 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:33.657 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:33.657 09:41:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:33.657 1+0 records in 00:03:33.657 1+0 records out 00:03:33.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413882 s, 253 MB/s 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.657 09:41:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:03:33.657 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:33.657 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:33.657 No valid GPT data, bailing 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:33.657 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:33.657 09:41:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:33.657 1+0 
records in 00:03:33.657 1+0 records out 00:03:33.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404538 s, 259 MB/s 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.657 09:41:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:03:33.657 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:33.657 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:33.657 No valid GPT data, bailing 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:33.657 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:33.657 09:41:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:33.657 1+0 records in 00:03:33.657 1+0 records out 00:03:33.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517828 s, 202 MB/s 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.657 09:41:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:33.657 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:33.657 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:33.657 No valid GPT data, bailing 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:33.657 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:33.657 09:41:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:33.657 1+0 records in 00:03:33.657 1+0 records out 00:03:33.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050807 s, 206 MB/s 00:03:33.657 09:41:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:33.657 09:41:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:33.657 09:41:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:33.657 09:41:20 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:33.657 09:41:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:33.657 No valid GPT data, bailing 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:33.657 09:41:20 -- scripts/common.sh@393 -- # pt= 00:03:33.657 09:41:20 -- scripts/common.sh@394 -- # return 1 00:03:33.657 09:41:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:33.657 1+0 records in 00:03:33.657 1+0 records out 00:03:33.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634718 s, 165 MB/s 00:03:33.657 09:41:20 -- spdk/autotest.sh@116 -- # sync 00:03:33.657 09:41:21 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:33.657 09:41:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:33.657 09:41:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:33.657 09:41:22 -- spdk/autotest.sh@122 -- # uname -s 00:03:33.657 09:41:22 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:33.657 09:41:22 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:33.657 09:41:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.657 09:41:22 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.657 09:41:22 -- common/autotest_common.sh@10 -- # set +x 00:03:33.657 ************************************ 00:03:33.657 START TEST setup.sh 00:03:33.657 ************************************ 00:03:33.657 09:41:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:33.919 * Looking for test storage... 00:03:33.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:33.919 09:41:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:33.919 09:41:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:33.919 09:41:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:33.919 09:41:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:33.919 09:41:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:33.919 09:41:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:33.919 09:41:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:33.919 09:41:22 -- scripts/common.sh@335 -- # IFS=.-: 00:03:33.919 09:41:22 -- scripts/common.sh@335 -- # read -ra ver1 00:03:33.919 09:41:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.919 09:41:22 -- scripts/common.sh@336 -- # read -ra ver2 00:03:33.919 09:41:22 -- scripts/common.sh@337 -- # local 'op=<' 00:03:33.919 09:41:22 -- scripts/common.sh@339 -- # ver1_l=2 00:03:33.919 09:41:22 -- scripts/common.sh@340 -- # ver2_l=1 00:03:33.919 09:41:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:33.919 09:41:22 -- scripts/common.sh@343 -- # case "$op" in 00:03:33.919 09:41:22 -- scripts/common.sh@344 -- # : 1 00:03:33.919 09:41:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:33.919 09:41:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.919 09:41:22 -- scripts/common.sh@364 -- # decimal 1 00:03:33.919 09:41:22 -- scripts/common.sh@352 -- # local d=1 00:03:33.919 09:41:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.919 09:41:22 -- scripts/common.sh@354 -- # echo 1 00:03:33.919 09:41:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:33.919 09:41:22 -- scripts/common.sh@365 -- # decimal 2 00:03:33.919 09:41:22 -- scripts/common.sh@352 -- # local d=2 00:03:33.919 09:41:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.919 09:41:22 -- scripts/common.sh@354 -- # echo 2 00:03:33.919 09:41:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:33.919 09:41:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:33.919 09:41:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:33.919 09:41:22 -- scripts/common.sh@367 -- # return 0 00:03:33.919 09:41:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.919 09:41:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:33.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.919 --rc genhtml_branch_coverage=1 00:03:33.919 --rc genhtml_function_coverage=1 00:03:33.919 --rc genhtml_legend=1 00:03:33.919 --rc geninfo_all_blocks=1 00:03:33.919 --rc geninfo_unexecuted_blocks=1 00:03:33.919 00:03:33.919 ' 00:03:33.919 09:41:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:33.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.919 --rc genhtml_branch_coverage=1 00:03:33.919 --rc genhtml_function_coverage=1 00:03:33.919 --rc genhtml_legend=1 00:03:33.919 --rc geninfo_all_blocks=1 00:03:33.919 --rc geninfo_unexecuted_blocks=1 00:03:33.919 00:03:33.919 ' 00:03:33.919 09:41:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:33.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.919 --rc genhtml_branch_coverage=1 00:03:33.919 --rc genhtml_function_coverage=1 00:03:33.919 --rc genhtml_legend=1 00:03:33.919 --rc geninfo_all_blocks=1 00:03:33.919 --rc geninfo_unexecuted_blocks=1 00:03:33.919 00:03:33.919 ' 00:03:33.919 09:41:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:33.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.919 --rc genhtml_branch_coverage=1 00:03:33.919 --rc genhtml_function_coverage=1 00:03:33.919 --rc genhtml_legend=1 00:03:33.919 --rc geninfo_all_blocks=1 00:03:33.919 --rc geninfo_unexecuted_blocks=1 00:03:33.919 00:03:33.919 ' 00:03:33.919 09:41:22 -- setup/test-setup.sh@10 -- # uname -s 00:03:33.919 09:41:22 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:33.919 09:41:22 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:33.919 09:41:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.919 09:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.919 09:41:22 -- common/autotest_common.sh@10 -- # set +x 00:03:33.919 ************************************ 00:03:33.919 START TEST acl 00:03:33.919 ************************************ 00:03:33.919 09:41:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:33.919 * Looking for test storage... 
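The pre-cleanup pass earlier in the log is worth a note: for every /dev/nvme*n* namespace, autotest.sh probes for a partition table with scripts/spdk-gpt.py and falls back to blkid -s PTTYPE; since no device reports one ("No valid GPT data, bailing", pt empty, return 1), each gets its first MiB zeroed with dd so stale metadata cannot leak into the tests. A sketch of that loop, reconstructed from the trace (helper bodies approximate the real scripts):

# Reconstructed sketch of the pre-cleanup GPT wipe traced above.
block_in_use() {
  local block=$1 pt
  # spdk-gpt.py knows about SPDK's protective GPT; fall back to blkid.
  if scripts/spdk-gpt.py "$block" 2> /dev/null; then
    return 0
  fi
  pt=$(blkid -s PTTYPE -o value "$block") || true
  [[ -n $pt ]]   # empty PTTYPE means the device is safe to wipe
}

for dev in $(ls /dev/nvme*n* | grep -v p || true); do
  if ! block_in_use "$dev"; then
    dd if=/dev/zero of="$dev" bs=1M count=1   # clear stale GPT/FS metadata
  fi
done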
00:03:33.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:33.919 09:41:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:33.919 09:41:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:33.919 09:41:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:34.180 09:41:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:34.180 09:41:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:34.180 09:41:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:34.180 09:41:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:34.180 09:41:22 -- scripts/common.sh@335 -- # IFS=.-: 00:03:34.180 09:41:22 -- scripts/common.sh@335 -- # read -ra ver1 00:03:34.180 09:41:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.180 09:41:22 -- scripts/common.sh@336 -- # read -ra ver2 00:03:34.180 09:41:22 -- scripts/common.sh@337 -- # local 'op=<' 00:03:34.180 09:41:22 -- scripts/common.sh@339 -- # ver1_l=2 00:03:34.180 09:41:22 -- scripts/common.sh@340 -- # ver2_l=1 00:03:34.180 09:41:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:34.180 09:41:22 -- scripts/common.sh@343 -- # case "$op" in 00:03:34.180 09:41:22 -- scripts/common.sh@344 -- # : 1 00:03:34.180 09:41:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:34.180 09:41:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:34.180 09:41:22 -- scripts/common.sh@364 -- # decimal 1 00:03:34.180 09:41:22 -- scripts/common.sh@352 -- # local d=1 00:03:34.180 09:41:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.180 09:41:22 -- scripts/common.sh@354 -- # echo 1 00:03:34.180 09:41:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:34.180 09:41:22 -- scripts/common.sh@365 -- # decimal 2 00:03:34.180 09:41:22 -- scripts/common.sh@352 -- # local d=2 00:03:34.180 09:41:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.180 09:41:22 -- scripts/common.sh@354 -- # echo 2 00:03:34.180 09:41:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:34.180 09:41:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:34.180 09:41:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:34.180 09:41:22 -- scripts/common.sh@367 -- # return 0 00:03:34.180 09:41:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.180 09:41:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:34.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.180 --rc genhtml_branch_coverage=1 00:03:34.180 --rc genhtml_function_coverage=1 00:03:34.180 --rc genhtml_legend=1 00:03:34.180 --rc geninfo_all_blocks=1 00:03:34.180 --rc geninfo_unexecuted_blocks=1 00:03:34.180 00:03:34.180 ' 00:03:34.180 09:41:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:34.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.180 --rc genhtml_branch_coverage=1 00:03:34.180 --rc genhtml_function_coverage=1 00:03:34.180 --rc genhtml_legend=1 00:03:34.180 --rc geninfo_all_blocks=1 00:03:34.180 --rc geninfo_unexecuted_blocks=1 00:03:34.180 00:03:34.180 ' 00:03:34.180 09:41:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:34.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.180 --rc genhtml_branch_coverage=1 00:03:34.180 --rc genhtml_function_coverage=1 00:03:34.180 --rc genhtml_legend=1 00:03:34.180 --rc geninfo_all_blocks=1 00:03:34.180 --rc geninfo_unexecuted_blocks=1 00:03:34.180 00:03:34.180 ' 00:03:34.180 09:41:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:34.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.180 --rc genhtml_branch_coverage=1 00:03:34.180 --rc genhtml_function_coverage=1 00:03:34.180 --rc genhtml_legend=1 00:03:34.180 --rc geninfo_all_blocks=1 00:03:34.180 --rc geninfo_unexecuted_blocks=1 00:03:34.180 00:03:34.180 ' 00:03:34.180 09:41:22 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:34.180 09:41:22 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:34.180 09:41:22 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:34.180 09:41:22 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:34.180 09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:34.180 09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2c2n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme2c2n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.180 09:41:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:34.180 09:41:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:34.180 09:41:22 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:34.181 
09:41:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:34.181 09:41:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:34.181 09:41:22 -- setup/acl.sh@12 -- # devs=() 00:03:34.181 09:41:22 -- setup/acl.sh@12 -- # declare -a devs 00:03:34.181 09:41:22 -- setup/acl.sh@13 -- # drivers=() 00:03:34.181 09:41:22 -- setup/acl.sh@13 -- # declare -A drivers 00:03:34.181 09:41:22 -- setup/acl.sh@51 -- # setup reset 00:03:34.181 09:41:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.181 09:41:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:35.123 09:41:24 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:35.123 09:41:24 -- setup/acl.sh@16 -- # local dev driver 00:03:35.123 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.123 09:41:24 -- setup/acl.sh@15 -- # setup output status 00:03:35.123 09:41:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.123 09:41:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:35.384 Hugepages 00:03:35.384 node hugesize free / total 00:03:35.384 09:41:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:35.384 09:41:24 -- setup/acl.sh@19 -- # continue 00:03:35.384 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.384 00:03:35.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:35.384 09:41:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:35.384 09:41:24 -- setup/acl.sh@19 -- # continue 00:03:35.384 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.384 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:35.384 09:41:24 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:35.384 09:41:24 -- setup/acl.sh@20 -- # continue 00:03:35.384 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.644 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:35.644 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:35.644 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.644 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:35.644 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:35.644 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.644 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:35.644 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:35.644 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.644 09:41:24 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:35.644 09:41:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:35.644 09:41:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
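Two traced helpers in this stretch deserve unpacking. get_zoned_devs walks /sys/block/nvme* and collects into zoned_devs any device whose queue/zoned attribute reads something other than "none" (here every check is "[[ none != none ]]", so the list stays empty and autotest's "(( 0 > 0 ))" skips the exclusion path). collect_setup_devs then parses "setup.sh status" rows, keeping lines whose BDF column looks like a PCI address and whose driver column is nvme, while honoring PCI_BLOCKED. Both sketches below are reconstructed from the xtrace output, not copied from the repo:

# Field positions follow the status header shown above:
# "Type BDF Vendor Device NUMA Driver Device Block devices".
is_block_zoned() {
  local device=$1
  [[ -e /sys/block/$device/queue/zoned ]] || return 1
  [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

collect_setup_devs() {
  local dev driver
  devs=()
  declare -gA drivers
  while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue            # skip hugepage and header rows
    [[ $driver == nvme ]] || continue            # keep only NVMe-bound controllers
    [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # honor the denied list
    devs+=("$dev")
    drivers["$dev"]=nvme
  done < <(scripts/setup.sh status)
}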
00:03:35.644 09:41:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.644 09:41:24 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:35.644 09:41:24 -- setup/acl.sh@54 -- # run_test denied denied 00:03:35.644 09:41:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.644 09:41:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.644 09:41:24 -- common/autotest_common.sh@10 -- # set +x 00:03:35.644 ************************************ 00:03:35.644 START TEST denied 00:03:35.644 ************************************ 00:03:35.644 09:41:24 -- common/autotest_common.sh@1114 -- # denied 00:03:35.644 09:41:24 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:35.644 09:41:24 -- setup/acl.sh@38 -- # setup output config 00:03:35.644 09:41:24 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:35.644 09:41:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.644 09:41:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:37.032 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:37.032 09:41:25 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:37.032 09:41:25 -- setup/acl.sh@28 -- # local dev driver 00:03:37.032 09:41:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:37.032 09:41:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:37.032 09:41:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:37.032 09:41:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:37.032 09:41:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:37.032 09:41:25 -- setup/acl.sh@41 -- # setup reset 00:03:37.032 09:41:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.032 09:41:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.621 00:03:43.621 real 0m7.183s 00:03:43.621 user 0m0.729s 00:03:43.621 sys 0m1.264s 00:03:43.621 09:41:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.621 ************************************ 00:03:43.621 END TEST denied 00:03:43.621 ************************************ 00:03:43.621 09:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:43.621 09:41:31 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:43.621 09:41:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.621 09:41:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.621 09:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:43.621 ************************************ 00:03:43.622 START TEST allowed 00:03:43.622 ************************************ 00:03:43.622 09:41:31 -- common/autotest_common.sh@1114 -- # allowed 00:03:43.622 09:41:31 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:43.622 09:41:31 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:43.622 09:41:31 -- setup/acl.sh@45 -- # setup output config 00:03:43.622 09:41:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.622 09:41:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.193 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.193 09:41:32 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:44.193 09:41:32 -- setup/acl.sh@28 -- # local dev driver 00:03:44.193 09:41:32 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.193 09:41:32 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:44.193 09:41:32 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:44.193 09:41:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.193 09:41:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.193 09:41:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.194 09:41:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:44.194 09:41:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:44.194 09:41:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.194 09:41:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.194 09:41:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.194 09:41:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:44.194 09:41:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:44.194 09:41:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.194 09:41:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.194 09:41:33 -- setup/acl.sh@48 -- # setup reset 00:03:44.194 09:41:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.194 09:41:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.138 ************************************ 00:03:45.138 END TEST allowed 00:03:45.138 ************************************ 00:03:45.138 00:03:45.138 real 0m2.203s 00:03:45.138 user 0m0.828s 00:03:45.138 sys 0m1.085s 00:03:45.138 09:41:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.138 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:45.138 ************************************ 00:03:45.138 END TEST acl 00:03:45.138 ************************************ 00:03:45.138 00:03:45.138 real 0m11.309s 00:03:45.138 user 0m2.276s 00:03:45.138 sys 0m3.379s 00:03:45.138 09:41:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.138 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:45.400 09:41:34 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.400 09:41:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.400 09:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.400 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:45.400 ************************************ 00:03:45.400 START TEST hugepages 00:03:45.400 ************************************ 00:03:45.400 09:41:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.400 * Looking for test storage... 
00:03:45.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:45.400 09:41:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:45.400 09:41:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:45.400 09:41:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:45.400 09:41:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:45.400 09:41:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:45.400 09:41:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:45.400 09:41:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:45.400 09:41:34 -- scripts/common.sh@335 -- # IFS=.-: 00:03:45.400 09:41:34 -- scripts/common.sh@335 -- # read -ra ver1 00:03:45.400 09:41:34 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.400 09:41:34 -- scripts/common.sh@336 -- # read -ra ver2 00:03:45.400 09:41:34 -- scripts/common.sh@337 -- # local 'op=<' 00:03:45.400 09:41:34 -- scripts/common.sh@339 -- # ver1_l=2 00:03:45.400 09:41:34 -- scripts/common.sh@340 -- # ver2_l=1 00:03:45.400 09:41:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:45.400 09:41:34 -- scripts/common.sh@343 -- # case "$op" in 00:03:45.400 09:41:34 -- scripts/common.sh@344 -- # : 1 00:03:45.400 09:41:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:45.400 09:41:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.400 09:41:34 -- scripts/common.sh@364 -- # decimal 1 00:03:45.400 09:41:34 -- scripts/common.sh@352 -- # local d=1 00:03:45.400 09:41:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.400 09:41:34 -- scripts/common.sh@354 -- # echo 1 00:03:45.400 09:41:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:45.400 09:41:34 -- scripts/common.sh@365 -- # decimal 2 00:03:45.400 09:41:34 -- scripts/common.sh@352 -- # local d=2 00:03:45.400 09:41:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.400 09:41:34 -- scripts/common.sh@354 -- # echo 2 00:03:45.400 09:41:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:45.400 09:41:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:45.400 09:41:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:45.400 09:41:34 -- scripts/common.sh@367 -- # return 0 00:03:45.400 09:41:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.400 09:41:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.400 --rc genhtml_branch_coverage=1 00:03:45.400 --rc genhtml_function_coverage=1 00:03:45.400 --rc genhtml_legend=1 00:03:45.400 --rc geninfo_all_blocks=1 00:03:45.400 --rc geninfo_unexecuted_blocks=1 00:03:45.400 00:03:45.400 ' 00:03:45.401 09:41:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.401 --rc genhtml_branch_coverage=1 00:03:45.401 --rc genhtml_function_coverage=1 00:03:45.401 --rc genhtml_legend=1 00:03:45.401 --rc geninfo_all_blocks=1 00:03:45.401 --rc geninfo_unexecuted_blocks=1 00:03:45.401 00:03:45.401 ' 00:03:45.401 09:41:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.401 --rc genhtml_branch_coverage=1 00:03:45.401 --rc genhtml_function_coverage=1 00:03:45.401 --rc genhtml_legend=1 00:03:45.401 --rc geninfo_all_blocks=1 00:03:45.401 --rc geninfo_unexecuted_blocks=1 00:03:45.401 00:03:45.401 ' 00:03:45.401 09:41:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.401 --rc genhtml_branch_coverage=1 00:03:45.401 --rc genhtml_function_coverage=1 00:03:45.401 --rc genhtml_legend=1 00:03:45.401 --rc geninfo_all_blocks=1 00:03:45.401 --rc geninfo_unexecuted_blocks=1 00:03:45.401 00:03:45.401 ' 00:03:45.401 09:41:34 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.401 09:41:34 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.401 09:41:34 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.401 09:41:34 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.401 09:41:34 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.401 09:41:34 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.401 09:41:34 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.401 09:41:34 -- setup/common.sh@18 -- # local node= 00:03:45.401 09:41:34 -- setup/common.sh@19 -- # local var val 00:03:45.401 09:41:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.401 09:41:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.401 09:41:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.401 09:41:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.401 09:41:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.401 09:41:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.401 09:41:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 09:41:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 09:41:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 5801484 kB' 'MemAvailable: 7357688 kB' 'Buffers: 2684 kB' 'Cached: 1769020 kB' 'SwapCached: 0 kB' 'Active: 465024 kB' 'Inactive: 1421948 kB' 'Active(anon): 125800 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 116988 kB' 'Mapped: 50968 kB' 'Shmem: 10532 kB' 'KReclaimable: 63920 kB' 'Slab: 162160 kB' 'SReclaimable: 63920 kB' 'SUnreclaim: 98240 kB' 'KernelStack: 6544 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 310104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:45.401
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key above against Hugepagesize and issues 'continue' for every non-matching key]
00:03:45.403 09:41:34 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.403 09:41:34 -- setup/common.sh@33 -- # echo 2048 00:03:45.403 09:41:34 -- setup/common.sh@33 -- # return 0 00:03:45.403 09:41:34 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.403 09:41:34 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.403 09:41:34 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.403 09:41:34 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.403 09:41:34 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.403 09:41:34 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:45.403 09:41:34 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.403 09:41:34 -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.403 09:41:34 -- setup/hugepages.sh@27 -- # local node 00:03:45.403 09:41:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.403 09:41:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.403 09:41:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.403 09:41:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.403 09:41:34 -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.403 09:41:34 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.403 09:41:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.403 09:41:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.403 09:41:34 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.403 09:41:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.403 09:41:34 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.403 09:41:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.403 09:41:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.403 09:41:34 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.403 09:41:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.403 09:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.403 09:41:34 -- common/autotest_common.sh@10 -- # set +x 00:03:45.664 ************************************ 00:03:45.664 START TEST default_setup 00:03:45.664 ************************************ 00:03:45.664 09:41:34 -- common/autotest_common.sh@1114 -- # default_setup 00:03:45.664 09:41:34 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.664 09:41:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.664 09:41:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.664 09:41:34 -- setup/hugepages.sh@51 -- # shift 00:03:45.664 09:41:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.664 09:41:34 -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.664 09:41:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.664 09:41:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.664 09:41:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.664 09:41:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.664 09:41:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.664 09:41:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.664 09:41:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.664 09:41:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.664 09:41:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.664 09:41:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.664 09:41:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.664 09:41:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.664 09:41:34 -- setup/hugepages.sh@73 -- # return 0 00:03:45.664 09:41:34 -- setup/hugepages.sh@137 -- # setup output 00:03:45.664 09:41:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.664 09:41:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
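For reference, each get_meminfo call traced in this hugepages suite is a plain key lookup over /proc/meminfo. A minimal sketch of the pattern, simplified to the global case (the traced helper also handles per-NUMA-node meminfo files and strips their "Node N" prefixes, which this sketch omits):

    # Sketch of the /proc/meminfo lookup pattern seen in the get_meminfo traces.
    get_meminfo() {
        local get=$1            # key to fetch, e.g. Hugepagesize
        local var val _
        while IFS=': ' read -r var val _; do
            # First field is the key (colon stripped by IFS), second the value.
            if [[ $var == "$get" ]]; then
                echo "$val"     # e.g. 2048 for Hugepagesize (value is in kB)
                return 0
            fi
        done </proc/meminfo
        return 1
    }
    # get_meminfo Hugepagesize   -> prints 2048 on this runner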
00:03:46.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.607 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.607 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.607 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.607 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.871 09:41:35 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.871 09:41:35 -- setup/hugepages.sh@89 -- # local node 00:03:46.871 09:41:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.871 09:41:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.871 09:41:35 -- setup/hugepages.sh@92 -- # local surp 00:03:46.871 09:41:35 -- setup/hugepages.sh@93 -- # local resv 00:03:46.871 09:41:35 -- setup/hugepages.sh@94 -- # local anon 00:03:46.871 09:41:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.871 09:41:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.871 09:41:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.871 09:41:35 -- setup/common.sh@18 -- # local node= 00:03:46.871 09:41:35 -- setup/common.sh@19 -- # local var val 00:03:46.871 09:41:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.871 09:41:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.871 09:41:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.871 09:41:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.871 09:41:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.871 09:41:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.871 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.871 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.871 09:41:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7924624 kB' 'MemAvailable: 9480636 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466980 kB' 'Inactive: 1421968 kB' 'Active(anon): 127756 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118864 kB' 'Mapped: 50868 kB' 'Shmem: 10492 kB' 'KReclaimable: 63492 kB' 'Slab: 161900 kB' 'SReclaimable: 63492 kB' 'SUnreclaim: 98408 kB' 'KernelStack: 6528 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:46.871
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key above against AnonHugePages and issues 'continue' for every non-matching key]
00:03:46.872 09:41:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.872 09:41:35 -- setup/common.sh@33 -- # echo 0 00:03:46.872 09:41:35 -- setup/common.sh@33 -- # return 0 00:03:46.872 09:41:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.872 09:41:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.872 09:41:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.872 09:41:35 -- setup/common.sh@18 -- # local node= 00:03:46.872 09:41:35 -- setup/common.sh@19 -- # local var val 00:03:46.872 09:41:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.872 09:41:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.872 09:41:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.872 09:41:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.872 09:41:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.873 09:41:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.873 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.873 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.873 09:41:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7924624 kB' 'MemAvailable: 9480636 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466500 kB' 'Inactive: 1421968 kB' 'Active(anon): 127276 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118640 kB' 'Mapped: 50868 kB' 'Shmem: 10492 kB' 'KReclaimable: 63492 kB' 'Slab: 161888 kB' 'SReclaimable: 63492 kB' 'SUnreclaim: 98396 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:46.873
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key above against HugePages_Surp and issues 'continue' for every non-matching key]
00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.874 09:41:35 -- setup/common.sh@33 -- # echo 0 00:03:46.874 09:41:35 -- setup/common.sh@33 -- # return 0 00:03:46.874 09:41:35 -- setup/hugepages.sh@99 -- # surp=0
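verify_nr_hugepages collects anon, surp, and (next) resv one key at a time, each read being a full pass over the same /proc/meminfo snapshot. As a hedged illustration only, the same counters could be gathered in a single pass (variable names are hypothetical, not from setup/hugepages.sh):

    # Hypothetical one-pass variant: collect all hugepage-related counters
    # from a single /proc/meminfo read instead of one full parse per key.
    declare -A hp=()
    while IFS=': ' read -r var val _; do
        case $var in
            AnonHugePages|HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp)
                hp[$var]=$val ;;
        esac
    done </proc/meminfo
    echo "anon=${hp[AnonHugePages]} surp=${hp[HugePages_Surp]} resv=${hp[HugePages_Rsvd]}"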
setup/common.sh@28 -- # mapfile -t mem 00:03:46.874 09:41:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7924624 kB' 'MemAvailable: 9480640 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466188 kB' 'Inactive: 1421972 kB' 'Active(anon): 126964 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118048 kB' 'Mapped: 50796 kB' 'Shmem: 10492 kB' 'KReclaimable: 63492 kB' 'Slab: 161892 kB' 'SReclaimable: 63492 kB' 'SUnreclaim: 98400 kB' 'KernelStack: 6496 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.874 09:41:35 -- setup/common.sh@32 -- # continue 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.874 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.874 09:41:35 -- 
[xtrace scan elided: setup/common.sh@31-32 reads and skips every snapshot field in turn until HugePages_Rsvd matches] 00:03:46.875 09:41:35 -- setup/common.sh@33 -- # echo 0 00:03:46.875 09:41:35 -- setup/common.sh@33 -- # return 0 00:03:46.875 nr_hugepages=1024 09:41:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.875 09:41:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.875 resv_hugepages=0 00:03:46.875 surplus_hugepages=0 00:03:46.875 anon_hugepages=0 00:03:46.875 09:41:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.875 09:41:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.875 09:41:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.875 09:41:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.875 09:41:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.875 09:41:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.875 09:41:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.875 09:41:35 -- setup/common.sh@18 -- # local node= 00:03:46.875 09:41:35 -- setup/common.sh@19 -- # local var val 00:03:46.875 09:41:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.875 09:41:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.875 09:41:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.875 09:41:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.875 09:41:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.875 09:41:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.875 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.875 09:41:35 -- setup/common.sh@16 -- # printf '%s\n' [snapshot identical to the HugePages_Rsvd snapshot above except 'Active: 466448 kB' 'Active(anon): 127224 kB' 'AnonPages: 118308 kB' 'VmallocUsed: 55576 kB']
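
The (( 1024 == nr_hugepages + surp + resv )) check above is the core assertion of this test: the kernel-reported HugePages_Total has to be fully explained by the requested pool plus surplus and reserved pages. Sketched in isolation, reusing the illustrative get_meminfo_value helper from the earlier note:

nr_hugepages=1024
surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_value HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch: $total" >&2
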
00:03:46.876 09:41:35 -- setup/common.sh@31 -- # read -r var val _ [xtrace scan elided: each snapshot field is read and skipped until HugePages_Total matches] 00:03:46.877 09:41:35 -- setup/common.sh@33 -- # echo 1024 00:03:46.877 09:41:35 -- setup/common.sh@33 -- # return 0 00:03:46.877 09:41:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.877 09:41:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.877 09:41:35 -- setup/hugepages.sh@27 -- # local node 00:03:46.877 09:41:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.877 09:41:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.877 09:41:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.877 09:41:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.877 09:41:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.877 09:41:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.877 09:41:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.877 09:41:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.877 09:41:35 -- setup/common.sh@18 -- # local node=0 00:03:46.877 09:41:35 -- setup/common.sh@19 -- # local var val 00:03:46.877 09:41:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.877 09:41:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.877 09:41:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.877 09:41:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.877 09:41:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.877 09:41:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.877 09:41:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.877 09:41:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.877 09:41:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7924624 kB' 'MemUsed: 4312476 kB' 'SwapCached: 0 kB' 'Active: 466364 kB' 'Inactive: 1421972 kB' 'Active(anon): 127140 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 50796 kB' 'AnonPages: 118168 kB' 'Shmem: 10492 kB' 'KernelStack: 6448 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63492 kB' 'Slab: 161888 kB' 'SReclaimable: 63492 kB' 'SUnreclaim: 98396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
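
Note the switch just above: because a node id (0) was passed this time, mem_f moves from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the snapshot that follows carries node-local fields (MemUsed, FilePages) instead of the global ones. A sketch of that source selection under the same assumptions:

node=0
mem_f=/proc/meminfo
if [[ -n $node ]] && [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # node-local counters
fi
mapfile -t mem < "$mem_f"   # one array element per line, ready for the 'Node <id> ' prefix strip
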
[xtrace scan elided: the node0 snapshot fields are read and skipped until HugePages_Surp matches] 00:03:46.878 09:41:35 -- setup/common.sh@33 -- # echo 0 00:03:46.878 09:41:35 -- setup/common.sh@33 -- # return 0 00:03:46.878 09:41:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.878 09:41:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.878 09:41:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.878 node0=1024 expecting 1024 00:03:46.878 ************************************ 00:03:46.878 END TEST default_setup 00:03:46.878 ************************************ 00:03:46.878 09:41:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.878 09:41:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.878 09:41:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.878 00:03:46.878 real 0m1.350s 00:03:46.878 user 0m0.487s 00:03:46.878 sys 0m0.689s 00:03:46.878 09:41:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:46.878 09:41:35 -- common/autotest_common.sh@10 -- # set +x
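
The START/END banners and the real/user/sys triple above come from the run_test wrapper in common/autotest_common.sh, which times each test function. A hedged sketch of what such a wrapper looks like (run_test_sketch, its banner width, and its exact output are illustrative, not the actual SPDK source):

run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                    # bash's time keyword emits the real/user/sys lines
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
# usage mirroring the log: run_test_sketch default_setup default_setup
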
00:03:46.878 09:41:35 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.878 09:41:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.878 09:41:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.878 09:41:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.878 ************************************ 00:03:46.878 START TEST per_node_1G_alloc 00:03:46.878 ************************************ 00:03:46.878 09:41:35 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:46.878 09:41:35 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.878 09:41:35 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:46.878 09:41:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.878 09:41:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.878 09:41:35 -- setup/hugepages.sh@51 -- # shift 00:03:46.878 09:41:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.878 09:41:35 -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.878 09:41:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.878 09:41:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.878 09:41:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.878 09:41:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.878 09:41:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.878 09:41:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.878 09:41:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.878 09:41:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.878 09:41:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.878 09:41:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.878 09:41:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.878 09:41:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.878 09:41:35 -- setup/hugepages.sh@73 -- # return 0 00:03:46.878 09:41:35 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.878 09:41:35 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:46.878 09:41:35 -- setup/hugepages.sh@146 -- # setup output 00:03:46.878 09:41:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.878 09:41:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.455 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.455 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.455 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.455 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.455 09:41:36 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:47.455 09:41:36 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:47.455 09:41:36 -- setup/hugepages.sh@89 -- # local node 00:03:47.455 09:41:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.455 09:41:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.455 09:41:36 -- setup/hugepages.sh@92 -- # local surp 00:03:47.455 09:41:36 -- setup/hugepages.sh@93 -- # local resv 00:03:47.455 09:41:36 -- setup/hugepages.sh@94 -- # local anon 00:03:47.455 09:41:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.455 09:41:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.455 09:41:36 -- setup/common.sh@17 -- # local get=AnonHugePages
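
get_test_nr_hugepages above turns a size in kB into a page count: 1048576 kB (1 GiB) at the default 2048 kB hugepage size gives the nr_hugepages=512 seen in the trace, pinned to node 0 through the NRHUGE/HUGENODE environment handed to scripts/setup.sh. The arithmetic in isolation (get_meminfo_value is the illustrative helper from the earlier note):

size_kb=1048576                                      # 1 GiB, as passed to get_test_nr_hugepages
hugepagesize_kb=$(get_meminfo_value Hugepagesize)    # 2048 on this host
nr_hugepages=$(( size_kb / hugepagesize_kb ))        # -> 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"               # matches the env visible in the trace
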
00:03:47.455 09:41:36 -- setup/common.sh@18 -- # local node= 00:03:47.455 09:41:36 -- setup/common.sh@19 -- # local var val 00:03:47.455 09:41:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.455 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.455 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.455 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.455 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.455 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.455 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.455 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8972360 kB' 'MemAvailable: 10528364 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 467088 kB' 'Inactive: 1421976 kB' 'Active(anon): 127864 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118932 kB' 'Mapped: 50872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161944 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98484 kB' 'KernelStack: 6580 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
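
The 'always [madvise] never' string tested a few lines up is the content of /sys/kernel/mm/transparent_hugepage/enabled; the bracketed entry is the active THP mode, and the test only bothers querying AnonHugePages when that mode is not [never]. A sketch of the same gate, again using the illustrative helper from the earlier note:

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
anon=0
if [[ $thp != *'[never]'* ]]; then
    anon=$(get_meminfo_value AnonHugePages)
fi
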
[xtrace scan elided: each snapshot field is read and skipped until AnonHugePages matches] 00:03:47.457 09:41:36 -- setup/common.sh@33 -- # echo 0 00:03:47.457 09:41:36 -- setup/common.sh@33 -- # return 0 00:03:47.457 09:41:36 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.457 09:41:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.457 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.457 09:41:36 -- setup/common.sh@18 -- # local node= 00:03:47.457 09:41:36 -- setup/common.sh@19 -- # local var val 00:03:47.457 09:41:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.457 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.457 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.457 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.457 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.457 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.457 09:41:36 -- setup/common.sh@31 -- # IFS=': '
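
The backslash-riddled patterns throughout this trace (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and friends) are just how xtrace prints an unquoted literal on the right-hand side of [[ == ]]: escaping every character keeps it from being treated as a glob. The two forms below behave identically:

var=HugePages_Surp
[[ $var == 'HugePages_Surp' ]] && echo match               # quoted literal
[[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo match   # escaped literal, as xtrace renders it
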
00:03:47.457 09:41:36 -- setup/common.sh@31 -- # read -r var val _
00:03:47.457 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8972240 kB' 'MemAvailable: 10528244 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 467032 kB' 'Inactive: 1421976 kB' 'Active(anon): 127808 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118876 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161940 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98480 kB' 'KernelStack: 6548 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:47.457 [... setup/common.sh@32 skip records condensed: every key from MemTotal through HugePages_Rsvd tested against HugePages_Surp and skipped with continue ...]
00:03:47.458 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.458 09:41:36 -- setup/common.sh@33 -- # echo 0
00:03:47.458 09:41:36 -- setup/common.sh@33 -- # return 0
00:03:47.458 09:41:36 -- setup/hugepages.sh@99 -- # surp=0
00:03:47.458 09:41:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:47.458 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:47.458 09:41:36 -- setup/common.sh@18 -- # local node=
00:03:47.458 09:41:36 -- setup/common.sh@19 -- # local var val
00:03:47.458 09:41:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:47.458 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.458 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.458 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.458 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.458 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.458 09:41:36 -- setup/common.sh@31 -- # IFS=': '
00:03:47.458 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8972240 kB' 'MemAvailable: 10528244 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466496 kB' 'Inactive: 1421976 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161960 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98500 kB' 'KernelStack: 6480 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
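Side note on the two dumps just captured: the hugepage pool itself is steady across snapshots, and its fields are mutually consistent. A quick sanity check using the values printed above:

    # 512 pages of 2048 kB each = 1048576 kB, matching 'Hugetlb: 1048576 kB',
    # i.e. a 1 GiB pool -- hence the test name per_node_1G_alloc.
    echo $(( 512 * 2048 ))    # prints 1048576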
00:03:47.458 09:41:36 -- setup/common.sh@31 -- # read -r var val _
00:03:47.459 [... setup/common.sh@32 skip records condensed: every key from MemTotal through HugePages_Free tested against HugePages_Rsvd and skipped with continue ...]
00:03:47.460 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.460 09:41:36 -- setup/common.sh@33 -- # echo 0
00:03:47.460 09:41:36 -- setup/common.sh@33 -- # return 0
00:03:47.460 09:41:36 -- setup/hugepages.sh@100 -- # resv=0
00:03:47.460 nr_hugepages=512
00:03:47.460 resv_hugepages=0
00:03:47.460 surplus_hugepages=0
00:03:47.460 anon_hugepages=0
00:03:47.460 09:41:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:47.460 09:41:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:47.460 09:41:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:47.460 09:41:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:47.460 09:41:36 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:47.460 09:41:36 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
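The records at hugepages.sh@97 through @110 implement the accounting this test relies on: anon, surplus, and reserved pages are read first, then the kernel's HugePages_Total must match the requested page count once surplus and reserved pages are added in. A sketch of that check, reconstructed from the trace (it assumes the get_meminfo sketch above is in scope; the exact SPDK control flow may differ):

    check_hugepage_accounting() {    # usage: check_hugepage_accounting <nr_hugepages>
        local nr_hugepages=$1
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)     # transparent hugepages in use
        surp=$(get_meminfo HugePages_Surp)    # pages allocated beyond the pool
        resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted in
        total=$(get_meminfo HugePages_Total)
        # With surp=0 and resv=0, as in this run, both checks reduce to
        # total == nr_hugepages (512 == 512 here).
        (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))
    }

    check_hugepage_accounting 512 && echo "hugepage accounting OK"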
00:03:47.460 09:41:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:47.460 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:47.460 09:41:36 -- setup/common.sh@18 -- # local node=
00:03:47.460 09:41:36 -- setup/common.sh@19 -- # local var val
00:03:47.460 09:41:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:47.460 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.460 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.460 09:41:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.460 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.460 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.460 09:41:36 -- setup/common.sh@31 -- # IFS=': '
00:03:47.460 09:41:36 -- setup/common.sh@31 -- # read -r var val _
00:03:47.460 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8972240 kB' 'MemAvailable: 10528244 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466412 kB' 'Inactive: 1421976 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118232 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161960 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98500 kB' 'KernelStack: 6448 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:47.461 [... setup/common.sh@32 skip records condensed: every key from MemTotal through Unaccepted tested against HugePages_Total and skipped with continue ...]
00:03:47.461 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.461 09:41:36 -- setup/common.sh@33 -- # echo 512
00:03:47.461 09:41:36 -- setup/common.sh@33 -- # return 0
00:03:47.461 09:41:36 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:47.461 09:41:36 -- setup/hugepages.sh@112 -- # get_nodes
00:03:47.461 09:41:36 -- setup/hugepages.sh@27 -- # local node
00:03:47.461 09:41:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.461 09:41:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:47.461 09:41:36 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:47.461 09:41:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:47.461 09:41:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
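The get_nodes trace at hugepages.sh@27-33 enumerates NUMA nodes through sysfs and records each node's current hugepage count (nodes_sys[0]=512, no_nodes=1 on this single-node guest). A sketch of that enumeration, reconstructed from the trace; the per-node count path is an assumption, since the log only shows the resulting assignment:

    shopt -s extglob    # +([0-9]) glob in the pathname expansion below

    declare -a nodes_sys
    get_nodes_sketch() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} strips through the last "node", leaving the
            # NUMA index: /sys/devices/system/node/node0 -> 0.
            # Hypothetical count source; the trace shows only "=512".
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))    # at least one node must be present
    }

    get_nodes_sketch && echo "found $no_nodes node(s): ${nodes_sys[*]} hugepages"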
nodes_test[node] += resv )) 00:03:47.461 09:41:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.461 09:41:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.461 09:41:36 -- setup/common.sh@18 -- # local node=0 00:03:47.461 09:41:36 -- setup/common.sh@19 -- # local var val 00:03:47.462 09:41:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.462 09:41:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.462 09:41:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.462 09:41:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.462 09:41:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.462 09:41:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8972240 kB' 'MemUsed: 3264860 kB' 'SwapCached: 0 kB' 'Active: 466500 kB' 'Inactive: 1421976 kB' 'Active(anon): 127276 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 50744 kB' 'AnonPages: 118320 kB' 'Shmem: 10492 kB' 'KernelStack: 6532 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63460 kB' 'Slab: 161960 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 
09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.462 09:41:36 -- setup/common.sh@32 -- # continue 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.462 09:41:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.462 
00:03:47.462 [xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (PageTables through HugePages_Free) with IFS=': ' read -r var val _ and, as none matches HugePages_Surp, hits continue every time]
00:03:47.463 09:41:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.463 09:41:36 -- setup/common.sh@33 -- # echo 0
00:03:47.463 09:41:36 -- setup/common.sh@33 -- # return 0
00:03:47.463 09:41:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:47.463 09:41:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:47.463 09:41:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:47.463 09:41:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:47.463 node0=512 expecting 512
00:03:47.463 09:41:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:47.463 09:41:36 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:47.463 
00:03:47.463 real	0m0.601s
00:03:47.463 user	0m0.252s
00:03:47.463 sys	0m0.361s
00:03:47.463 09:41:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:47.463 ************************************
00:03:47.463 09:41:36 -- common/autotest_common.sh@10 -- # set +x
00:03:47.463 END TEST per_node_1G_alloc
00:03:47.463 ************************************
00:03:47.725 09:41:36 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:47.725 09:41:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:47.725 09:41:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:47.725 09:41:36 -- common/autotest_common.sh@10 -- # set +x
00:03:47.725 ************************************
00:03:47.725 START TEST even_2G_alloc
00:03:47.725 ************************************
00:03:47.725 09:41:36 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:47.725 09:41:36 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:47.725 09:41:36 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:47.725 09:41:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:47.725 09:41:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:47.725 09:41:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:47.725 09:41:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:47.725 09:41:36 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:47.725 09:41:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:47.725 09:41:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
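The nr_hugepages=1024 just traced is plain division of the requested size by the default hugepage size. A minimal standalone re-derivation (hypothetical snippet, not part of the suite; assumes both sizes are in kB, matching the 'Hugepagesize: 2048 kB' entries in the meminfo snapshots below):

    size_kb=2097152                                            # argument to get_test_nr_hugepages: 2 GiB in kB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    (( size_kb >= hp_kb )) || exit 1                           # mirrors the size >= default_hugepages check
    echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # 2097152 / 2048 = 1024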
00:03:47.725 09:41:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:47.725 09:41:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:47.725 09:41:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:47.725 09:41:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:47.725 09:41:36 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:47.725 09:41:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:47.725 09:41:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:47.725 09:41:36 -- setup/hugepages.sh@83 -- # : 0
00:03:47.725 09:41:36 -- setup/hugepages.sh@84 -- # : 0
00:03:47.725 09:41:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:47.725 09:41:36 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:47.725 09:41:36 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:47.725 09:41:36 -- setup/hugepages.sh@153 -- # setup output
00:03:47.725 09:41:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.725 09:41:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:47.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:47.985 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.985 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.985 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:47.985 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.250 09:41:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:48.250 09:41:37 -- setup/hugepages.sh@89 -- # local node
00:03:48.250 09:41:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.250 09:41:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.250 09:41:37 -- setup/hugepages.sh@92 -- # local surp
00:03:48.250 09:41:37 -- setup/hugepages.sh@93 -- # local resv
00:03:48.250 09:41:37 -- setup/hugepages.sh@94 -- # local anon
00:03:48.250 09:41:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.250 09:41:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.250 09:41:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.250 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.250 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.250 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.250 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.250 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.250 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.250 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.250 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.250 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.250 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.250 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7916160 kB' 'MemAvailable: 9472164 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466900 kB' 'Inactive: 1421976 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118796 kB' 'Mapped: 50752 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161968 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98508 kB' 'KernelStack: 6524 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.250 [xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo field (MemTotal through HardwareCorrupted) with IFS=': ' read -r var val _ and, as none matches AnonHugePages, hits continue every time]
00:03:48.251 09:41:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.251 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.251 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.251 09:41:37 -- setup/hugepages.sh@97 -- # anon=0
00:03:48.251 09:41:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.251 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.251 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.251 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.251 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.251 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.251 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.251 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.251 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.251 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.251 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.251 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.251 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7916160 kB' 'MemAvailable: 9472164 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466552 kB' 'Inactive: 1421976 kB' 'Active(anon): 127328 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118448 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161952 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98492 kB' 'KernelStack: 6512 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
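Every condensed skip-loop above and below is the same construct: get_meminfo splits each meminfo line on ': ', compares the key to the requested field, and continues until it matches. A minimal sketch of that pattern (modeled on the trace, not a verbatim copy of setup/common.sh, and covering only the system-wide /proc/meminfo path):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated [[ <field> == ... ]] / continue entries
            echo "$val"                        # e.g. 1024 for HugePages_Total
            return 0
        done </proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this runner, as echoed in the trace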
00:03:48.251 [xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo field (MemTotal through HugePages_Rsvd) with IFS=': ' read -r var val _ and, as none matches HugePages_Surp, hits continue every time]
00:03:48.253 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.253 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.253 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.253 09:41:37 -- setup/hugepages.sh@99 -- # surp=0
00:03:48.253 09:41:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.253 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.253 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.253 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.253 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.253 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.253 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.253 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.253 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.253 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.253 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.253 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.253 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7916160 kB' 'MemAvailable: 9472164 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466556 kB' 'Inactive: 1421976 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118452 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161952 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98492 kB' 'KernelStack: 6512 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.253 [xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo field (MemTotal through HugePages_Free) with IFS=': ' read -r var val _ and, as none matches HugePages_Rsvd, hits continue every time]
00:03:48.254 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.254 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.254 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.254 nr_hugepages=1024
00:03:48.254 resv_hugepages=0
00:03:48.254 surplus_hugepages=0
00:03:48.254 anon_hugepages=0
00:03:48.254 09:41:37 -- setup/hugepages.sh@100 -- # resv=0
00:03:48.254 09:41:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:48.254 09:41:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.254 09:41:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.254 09:41:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.254 09:41:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.254 09:41:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
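The two arithmetic checks just traced are the heart of verify_nr_hugepages: the page count the kernel reports must equal the requested pages plus surplus and reserved. Restated as a standalone sketch with the values read back above (all variable names local to this sketch):

    nr_hugepages=1024   # requested via get_test_nr_hugepages
    surp=0              # HugePages_Surp, from get_meminfo
    resv=0              # HugePages_Rsvd, from get_meminfo
    total=1024          # HugePages_Total in every snapshot above
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
        && echo 'hugepage accounting consistent'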
00:03:48.254 09:41:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:48.254 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:48.254 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.254 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.254 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.254 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.254 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.254 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.254 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.254 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.254 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.254 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.254 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7916160 kB' 'MemAvailable: 9472164 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466524 kB' 'Inactive: 1421976 kB' 'Active(anon): 127300 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118416 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161944 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98484 kB' 'KernelStack: 6496 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.255 [xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo field (MemTotal through FileHugePages so far) with IFS=': ' read -r var val _ and, as none matches HugePages_Total, hits continue every time; the trace continues past the end of this log fragment]
setup/common.sh@32 -- # continue 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.255 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.255 09:41:37 -- setup/common.sh@33 -- # echo 1024 00:03:48.255 09:41:37 -- setup/common.sh@33 -- # return 0 00:03:48.255 09:41:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.255 09:41:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.255 09:41:37 -- setup/hugepages.sh@27 -- # local node 00:03:48.255 09:41:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.255 09:41:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.255 09:41:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.255 09:41:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.255 09:41:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.255 09:41:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.255 09:41:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.255 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.255 09:41:37 -- setup/common.sh@18 -- # local node=0 00:03:48.255 09:41:37 -- setup/common.sh@19 -- # local var val 00:03:48.255 09:41:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.255 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.255 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.255 09:41:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.255 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.255 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.255 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915908 kB' 'MemUsed: 4321192 kB' 'SwapCached: 0 kB' 'Active: 466560 kB' 'Inactive: 1421976 kB' 'Active(anon): 127336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 50744 kB' 'AnonPages: 118460 kB' 'Shmem: 10492 kB' 'KernelStack: 6512 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 
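
The trace above is one complete call of get_meminfo: it slurps the meminfo file into an array, strips any "Node N " prefix, then scans key/value pairs until the requested key matches and echoes its value. A minimal runnable sketch reconstructed from that xtrace (simplified; the authoritative version is setup/common.sh in the SPDK repo):

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below

    get_meminfo() {                       # usage: get_meminfo <Key> [node]
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix on sysfs lines
        while IFS=': ' read -r var val _; do
            # First field is the key, second the value (a trailing "kB" lands in _).
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total           # prints 1024 on the VM traced here

The linear scan is why the xtrace is so chatty: every call replays one "continue" per non-matching meminfo key.
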
00:03:48.255 09:41:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.255 09:41:37 -- setup/hugepages.sh@112 -- # get_nodes
00:03:48.255 09:41:37 -- setup/hugepages.sh@27 -- # local node
00:03:48.255 09:41:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.255 09:41:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:48.255 09:41:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:48.255 09:41:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:48.255 09:41:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.255 09:41:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:48.255 09:41:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:48.255 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.255 09:41:37 -- setup/common.sh@18 -- # local node=0
00:03:48.255 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.255 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.255 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.255 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:48.255 09:41:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:48.255 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.255 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.255 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.255 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7915908 kB' 'MemUsed: 4321192 kB' 'SwapCached: 0 kB' 'Active: 466560 kB' 'Inactive: 1421976 kB' 'Active(anon): 127336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 50744 kB' 'AnonPages: 118460 kB' 'Shmem: 10492 kB' 'KernelStack: 6512 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63460 kB' 'Slab: 161944 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:48.255 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.255 09:41:37 -- setup/common.sh@32 -- # (xtrace condensed: the loop hits "continue" for every node0 meminfo key that is not HugePages_Surp)
00:03:48.256 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.256 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.256 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.256 09:41:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
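
What hugepages.sh is checking here, spelled out: the global HugePages_Total read above (1024) must equal the requested nr_hugepages plus surplus and reserved pages, and each node's expected count then absorbs resv plus that node's own surplus from its sysfs meminfo. A hedged sketch of that accounting as a fragment, with variable names mirroring the trace and surp/resv/nr_hugepages assumed to be set already (both surp and resv are 0 in this run):

    # Global invariant traced at hugepages.sh@110.
    total=$(get_meminfo HugePages_Total)          # 1024
    (( total == nr_hugepages + surp + resv )) || exit 1
    # Per-node expectation traced at @115-117: fold in resv, then the
    # node-local surplus from /sys/devices/system/node/node<N>/meminfo.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

With one node and no surplus, node0 is expected to hold all 1024 pages, which is exactly the "node0=1024 expecting 1024" line that follows.
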
00:03:48.256 09:41:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.256 node0=1024 expecting 1024
00:03:48.256 ************************************
00:03:48.256 END TEST even_2G_alloc
00:03:48.256 ************************************
00:03:48.256 09:41:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.256 09:41:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.256 09:41:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:48.256 09:41:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:48.256
00:03:48.256 real 0m0.637s
00:03:48.256 user 0m0.253s
00:03:48.256 sys 0m0.387s
00:03:48.256 09:41:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:48.256 09:41:37 -- common/autotest_common.sh@10 -- # set +x
00:03:48.256 09:41:37 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:48.256 09:41:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:48.256 09:41:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:48.256 09:41:37 -- common/autotest_common.sh@10 -- # set +x
00:03:48.256 ************************************
00:03:48.256 START TEST odd_alloc
00:03:48.256 ************************************
00:03:48.256 09:41:37 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:48.256 09:41:37 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:48.256 09:41:37 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:48.256 09:41:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:48.256 09:41:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.256 09:41:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:48.256 09:41:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:48.256 09:41:37 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:48.256 09:41:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.256 09:41:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:48.256 09:41:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:48.257 09:41:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.257 09:41:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.257 09:41:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:48.257 09:41:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:48.257 09:41:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.257 09:41:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:48.257 09:41:37 -- setup/hugepages.sh@83 -- # : 0
00:03:48.257 09:41:37 -- setup/hugepages.sh@84 -- # : 0
00:03:48.257 09:41:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.257 09:41:37 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:48.257 09:41:37 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:48.257 09:41:37 -- setup/hugepages.sh@160 -- # setup output
00:03:48.257 09:41:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.257 09:41:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:48.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:48.831 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.831 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.831 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.831 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
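
The sizing above: HUGEMEM=2049 asks for 2049 MB of hugepage memory, i.e. 2098176 kB, which is not a whole number of 2048 kB pages, so the test ends up with the odd count 1025 (the point of odd_alloc). A hedged sketch of that conversion; the round-up is my reading of the trace, where size=2098176 yields nr_hugepages=1025, and the exact expression lives in setup/hugepages.sh:

    # 2049 MB requested -> 2098176 kB -> 1025 pages of 2048 kB (rounded up).
    size_kb=$(( 2049 * 1024 ))        # 2098176, matches "local size=2098176" above
    hugepage_kb=2048                  # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages"              # 1025

With a single NUMA node, the whole odd allocation lands on node0 (nodes_test[0]=1025 in the trace above).
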
00:03:48.831 09:41:37 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:48.831 09:41:37 -- setup/hugepages.sh@89 -- # local node
00:03:48.831 09:41:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.831 09:41:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.831 09:41:37 -- setup/hugepages.sh@92 -- # local surp
00:03:48.831 09:41:37 -- setup/hugepages.sh@93 -- # local resv
00:03:48.831 09:41:37 -- setup/hugepages.sh@94 -- # local anon
00:03:48.831 09:41:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.831 09:41:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.831 09:41:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.831 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.831 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.831 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.831 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.831 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.831 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.831 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.831 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.831 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.831 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.831 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7910656 kB' 'MemAvailable: 9466660 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466556 kB' 'Inactive: 1421976 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118388 kB' 'Mapped: 50864 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161868 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98408 kB' 'KernelStack: 6540 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.831 09:41:37 -- setup/common.sh@32 -- # (xtrace condensed: the loop hits "continue" for every meminfo key that is not AnonHugePages)
00:03:48.832 09:41:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.832 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.832 09:41:37 -- setup/common.sh@33 -- # return 0
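
The @96 test above is the transparent-hugepage guard: the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled marks the active THP mode ("always [madvise] never" here), and only when that mode is not "never" does verify_nr_hugepages bother sampling AnonHugePages, since THP-backed anonymous memory could otherwise distort the hugetlb accounting. A hedged sketch of the guard, assuming the get_meminfo helper from earlier (the sysfs path is standard Linux; the variable names are mine):

    # The kernel brackets the active mode, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0                              # THP off: nothing to account for
    fi

That sampled value is exactly the anon=0 assignment that follows.
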
00:03:48.832 09:41:37 -- setup/hugepages.sh@97 -- # anon=0
00:03:48.832 09:41:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.832 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.832 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.832 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.832 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.832 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.832 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.832 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.832 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.832 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.832 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.833 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.833 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7910656 kB' 'MemAvailable: 9466660 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466640 kB' 'Inactive: 1421976 kB' 'Active(anon): 127416 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118488 kB' 'Mapped: 50864 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161868 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98408 kB' 'KernelStack: 6524 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.833 09:41:37 -- setup/common.sh@32 -- # (xtrace condensed: the loop hits "continue" for every meminfo key that is not HugePages_Surp)
00:03:48.834 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.834 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.834 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.834 09:41:37 -- setup/hugepages.sh@99 -- # surp=0
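
Each of the lookups in this verify pass (AnonHugePages, HugePages_Surp above, HugePages_Rsvd below) re-reads and re-scans the whole meminfo file. For comparison, a single awk pass can pull all of the hugepage counters at once; a hedged alternative sketch, not what setup/common.sh actually does:

    # One pass over /proc/meminfo instead of one get_meminfo call per key.
    read -r total free rsvd surp < <(awk '
        /^HugePages_Total:/ { t = $2 }
        /^HugePages_Free:/  { f = $2 }
        /^HugePages_Rsvd:/  { r = $2 }
        /^HugePages_Surp:/  { s = $2 }
        END { print t, f, r, s }' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"   # total=1025 ... in this run

The per-key helper keeps the test scripts simple and xtrace-friendly at the cost of the repeated scans.
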
00:03:48.834 09:41:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.834 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.834 09:41:37 -- setup/common.sh@18 -- # local node=
00:03:48.834 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.834 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.834 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.834 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.834 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.834 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.834 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.834 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.834 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.834 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7910656 kB' 'MemAvailable: 9466660 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466744 kB' 'Inactive: 1421976 kB' 'Active(anon): 127520 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118564 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161864 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98404 kB' 'KernelStack: 6496 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.834 09:41:37 -- setup/common.sh@32 -- # (xtrace condensed: the loop hits "continue" for each meminfo key while scanning for HugePages_Rsvd)
00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.835 09:41:37
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # continue 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.835 09:41:37 -- setup/common.sh@33 -- # echo 0 00:03:48.835 09:41:37 -- setup/common.sh@33 -- # return 0 00:03:48.835 nr_hugepages=1025 00:03:48.835 resv_hugepages=0 00:03:48.835 surplus_hugepages=0 00:03:48.835 anon_hugepages=0 00:03:48.835 09:41:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.835 09:41:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.835 09:41:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.835 09:41:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.835 09:41:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.835 09:41:37 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.835 09:41:37 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.835 09:41:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.835 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.835 09:41:37 -- setup/common.sh@18 -- # local node= 00:03:48.835 09:41:37 -- setup/common.sh@19 -- # local var val 00:03:48.835 09:41:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.835 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.835 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.835 09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.835 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.835 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.835 09:41:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.835 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7910656 kB' 'MemAvailable: 9466660 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466564 kB' 'Inactive: 1421976 kB' 'Active(anon): 127340 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118392 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161864 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98404 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
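The trace above is easier to follow once you see the shape of the helper generating it: get_meminfo mapfiles /proc/meminfo (or a node's own meminfo file when a node argument is given), strips any "Node N " prefix, then feeds the fields through a read loop until the requested key matches and its value is echoed. A minimal standalone sketch reconstructed from the xtrace (function name, paths, and variable names mirror the trace; the body is a paraphrase, not the verbatim setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below
    get_meminfo() {     # usage: get_meminfo <Field> [<node>]
        local get=$1 node=$2 var val _ mem_f mem
        mem_f=/proc/meminfo
        # Per-node lookups read the node's own meminfo instead (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " line prefixes
        # This printf/read pair is what produces the long @16/@31/@32 trace runs.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Rsvd    # -> 0 on the VM in this run

Each [[ field == \H\u\g\e... ]] line in the log is one iteration of that loop; bash xtrace marks the quoted right-hand side as a literal (non-glob) pattern by backslash-escaping every character, which is why HugePages_Rsvd appears shredded into \H\u\g\e\P\a\g\e\s\_\R\s\v\d.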
09:41:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Total
09:41:37 -- setup/common.sh@18 -- # local node=
09:41:37 -- setup/common.sh@19 -- # local var val
09:41:37 -- setup/common.sh@20 -- # local mem_f mem
09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
09:41:37 -- setup/common.sh@25 -- # [[ -n '' ]]
09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
09:41:37 -- setup/common.sh@31 -- # IFS=': '
09:41:37 -- setup/common.sh@31 -- # read -r var val _
09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7910656 kB' 'MemAvailable: 9466660 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466564 kB' 'Inactive: 1421976 kB' 'Active(anon): 127340 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118392 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161864 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98404 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:48.836 [xtrace condensed: same field-by-field scan as above, this time until HugePages_Total matches]
00:03:48.837 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.837 09:41:37 -- setup/common.sh@33 -- # echo 1025
00:03:48.837 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.837 09:41:37 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:48.837 09:41:37 -- setup/hugepages.sh@112 -- # get_nodes
00:03:48.837 09:41:37 -- setup/hugepages.sh@27 -- # local node
00:03:48.837 09:41:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.837 09:41:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:03:48.837 09:41:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:48.837 09:41:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:48.837 09:41:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.837 09:41:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:48.837 09:41:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:48.837 09:41:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.837 09:41:37 -- setup/common.sh@18 -- # local node=0
00:03:48.837 09:41:37 -- setup/common.sh@19 -- # local var val
00:03:48.837 09:41:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.837 09:41:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.837 09:41:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:48.837 09:41:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:48.837 09:41:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.837 09:41:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.837 09:41:37 -- setup/common.sh@31 -- # IFS=': '
00:03:48.837 09:41:37 -- setup/common.sh@31 -- # read -r var val _
00:03:48.837 09:41:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7910656 kB' 'MemUsed: 4326444 kB' 'SwapCached: 0 kB' 'Active: 466180 kB' 'Inactive: 1421976 kB' 'Active(anon): 126956 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 50744 kB' 'AnonPages: 118056 kB' 'Shmem: 10492 kB' 'KernelStack: 6516 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63460 kB' 'Slab: 161864 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:03:48.837 [xtrace condensed: scan of the node0 meminfo fields above until HugePages_Surp matches]
00:03:48.838 09:41:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.838 09:41:37 -- setup/common.sh@33 -- # echo 0
00:03:48.838 09:41:37 -- setup/common.sh@33 -- # return 0
00:03:48.838 09:41:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.838 09:41:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.838 09:41:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
node0=1025 expecting 1025
************************************
00:03:48.838 END TEST odd_alloc
************************************
09:41:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.838 09:41:37 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:03:48.838 09:41:37 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:03:48.838
00:03:48.838 real 0m0.614s
00:03:48.838 user 0m0.254s
00:03:48.838 sys 0m0.364s
00:03:48.838 09:41:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:48.838 09:41:37 -- common/autotest_common.sh@10 -- # set +x
00:03:49.099 09:41:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:49.100 09:41:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:49.100 09:41:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:49.100 09:41:37 -- common/autotest_common.sh@10 -- # set +x
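Arithmetic-wise the test that just finished is simple: odd_alloc reserves an odd hugepage count (1025) and checks that the global counters and the per-node view agree. The invariant, restated with this run's values (a sketch of the check, not the hugepages.sh source itself):

    nr_hugepages=1025 resv=0 surp=0   # HugePages_Total / HugePages_Rsvd / HugePages_Surp read above
    (( nr_hugepages + surp + resv == 1025 )) || echo "global hugepage accounting is off"
    node0=$(( 1025 + resv ))          # hugepages.sh@116: nodes_test[node] += resv
    (( node0 == 1025 )) && echo "node0=1025 expecting 1025"

On this single-node VM the per-node sum must equal the global total, which is exactly the 'node0=1025 expecting 1025' line above.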
00:03:49.100 ************************************
00:03:49.100 START TEST custom_alloc
************************************
09:41:37 -- common/autotest_common.sh@1114 -- # custom_alloc
00:03:49.100 09:41:37 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:49.100 09:41:37 -- setup/hugepages.sh@169 -- # local node
00:03:49.100 09:41:37 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:49.100 09:41:37 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:49.100 09:41:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:49.100 09:41:37 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:49.100 09:41:37 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:49.100 09:41:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:49.100 09:41:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:49.100 09:41:37 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:49.100 09:41:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:49.100 09:41:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:49.100 09:41:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:49.100 09:41:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:49.100 09:41:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:49.100 09:41:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:49.100 09:41:37 -- setup/hugepages.sh@83 -- # : 0
00:03:49.100 09:41:37 -- setup/hugepages.sh@84 -- # : 0
00:03:49.100 09:41:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:49.100 09:41:37 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:49.100 09:41:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:49.100 09:41:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:49.100 09:41:37 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:49.100 09:41:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:49.100 09:41:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:49.100 09:41:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:49.100 09:41:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:49.100 09:41:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:49.100 09:41:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:49.100 09:41:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:49.100 09:41:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:49.100 09:41:37 -- setup/hugepages.sh@78 -- # return 0
00:03:49.100 09:41:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:03:49.100 09:41:37 -- setup/hugepages.sh@187 -- # setup output
00:03:49.100 09:41:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.100 09:41:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
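The custom_alloc prologue above boils down to one conversion and one environment variable: get_test_nr_hugepages turns a kB budget into a page count, and HUGENODE tells setup.sh where to place the pages. With this run's numbers (a sketch of the arithmetic only; the 2048 kB default hugepage size is taken from the 'Hugepagesize: 2048 kB' snapshots above):

    size_kb=1048576                                  # hugepages.sh@49: local size=1048576 (1 GiB)
    hugepagesize_kb=2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 512
    HUGENODE="nodes_hp[0]=${nr_hugepages}"           # -> nodes_hp[0]=512, as at hugepages.sh@187
    echo "$HUGENODE"

setup.sh then reserves those 512 pages on node 0, which is the device and hugepage output that follows.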
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.362 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.362 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.362 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.362 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.627 09:41:38 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:49.627 09:41:38 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:49.627 09:41:38 -- setup/hugepages.sh@89 -- # local node 00:03:49.627 09:41:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.627 09:41:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.627 09:41:38 -- setup/hugepages.sh@92 -- # local surp 00:03:49.628 09:41:38 -- setup/hugepages.sh@93 -- # local resv 00:03:49.628 09:41:38 -- setup/hugepages.sh@94 -- # local anon 00:03:49.628 09:41:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.628 09:41:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.628 09:41:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.628 09:41:38 -- setup/common.sh@18 -- # local node= 00:03:49.628 09:41:38 -- setup/common.sh@19 -- # local var val 00:03:49.628 09:41:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.628 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.628 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.628 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.628 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.628 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8963352 kB' 'MemAvailable: 10519356 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466916 kB' 'Inactive: 1421976 kB' 'Active(anon): 127692 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 50944 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161844 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98384 kB' 'KernelStack: 6472 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.628 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.628 09:41:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 
09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.629 09:41:38 -- setup/common.sh@33 -- # echo 0 00:03:49.629 09:41:38 -- setup/common.sh@33 -- # return 0 00:03:49.629 09:41:38 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.629 09:41:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.629 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.629 09:41:38 -- setup/common.sh@18 -- # local node= 00:03:49.629 09:41:38 -- setup/common.sh@19 -- # local var val 00:03:49.629 09:41:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.629 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.629 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.629 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.629 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.629 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8963608 kB' 'MemAvailable: 10519612 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466760 kB' 'Inactive: 1421976 kB' 'Active(anon): 127536 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161916 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98456 kB' 'KernelStack: 6496 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 
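A note on the backslash-laden patterns in this trace: an expression like [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] is not corruption. Inside [[ ]], a quoted right-hand side of == is matched literally rather than as a glob, and bash xtrace re-prints such a word with every character escaped so it would still be taken literally if re-executed. A minimal reproduction (variable names are illustrative, not the exact setup/common.sh source):

  get=AnonHugePages
  var=AnonHugePages
  set -x
  [[ $var == "$get" ]] && echo matched
  # xtrace renders the quoted right-hand side as:
  # + [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]

The same mechanism produces [[ 512 == \5\1\2 ]] and the \H\u\g\e\P\a\g\e\s\_\S\u\r\p patterns later in this log.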
-- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.629 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.629 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # 
continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.630 09:41:38 -- setup/common.sh@33 -- # echo 0 00:03:49.630 09:41:38 -- setup/common.sh@33 -- # return 0 00:03:49.630 09:41:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.630 09:41:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.630 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.630 09:41:38 -- setup/common.sh@18 -- # local node= 00:03:49.630 09:41:38 -- setup/common.sh@19 -- # local var val 00:03:49.630 09:41:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.630 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.630 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.630 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.630 09:41:38 -- 
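The mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") pair that follows (and recurs before every meminfo scan in this trace) is the parser's normalization step: mapfile slurps the chosen meminfo file into an array, and the extglob expansion strips the "Node N " prefix that per-node meminfo files carry, so one key/value scanner handles both /proc/meminfo and /sys/devices/system/node/nodeN/meminfo. A standalone sketch, assuming a NUMA node 0 exists on the machine:

  shopt -s extglob                  # +([0-9]) below is an extended glob
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  # Per-node lines read "Node 0 HugePages_Total: 512"; stripping the
  # "Node <n> " prefix makes them parse like plain /proc/meminfo lines.
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"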
setup/common.sh@28 -- # mapfile -t mem 00:03:49.630 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8963608 kB' 'MemAvailable: 10519612 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466724 kB' 'Inactive: 1421976 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161908 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98448 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.630 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.630 09:41:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 
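The long runs of [[ ... ]] / continue on either side of this point are a single shell loop under xtrace: get_meminfo HugePages_Rsvd walks every field of the snapshot until the requested key matches, so each non-matching key costs four trace entries (IFS=': ', read, the [[ ]] test, continue). Reassembled from the trace at setup/common.sh@17-33, the function looks roughly like this; treat it as a sketch inferred from the log rather than the verbatim source:

  shopt -s extglob
  get_meminfo() {                       # usage: get_meminfo <key> [node]
      local get=$1 node=$2
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo    # per-node query
      elif [[ -n $node ]]; then
          return 1                      # a node was requested but is absent
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # normalize per-node "Node N " prefixes
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the scan visible above
          echo "$val"                        # e.g. 0 for HugePages_Rsvd
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }

With that shape, the surrounding hugepages.sh calls read naturally: anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp), resv=$(get_meminfo HugePages_Rsvd), and later get_meminfo HugePages_Surp 0 for node 0.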
00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- 
setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.631 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.631 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.632 09:41:38 -- setup/common.sh@33 -- # echo 0 00:03:49.632 09:41:38 -- setup/common.sh@33 -- # return 0 00:03:49.632 nr_hugepages=512 00:03:49.632 resv_hugepages=0 00:03:49.632 surplus_hugepages=0 00:03:49.632 anon_hugepages=0 00:03:49.632 09:41:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.632 09:41:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:49.632 09:41:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.632 09:41:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.632 09:41:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.632 09:41:38 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:49.632 09:41:38 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:49.632 09:41:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.632 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.632 09:41:38 -- setup/common.sh@18 -- # local node= 00:03:49.632 09:41:38 -- setup/common.sh@19 -- # local var val 00:03:49.632 09:41:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.632 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.632 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.632 09:41:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.632 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.632 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8963608 kB' 'MemAvailable: 10519612 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 466748 kB' 'Inactive: 1421976 kB' 'Active(anon): 127524 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 
kB' 'AnonPages: 118568 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63460 kB' 'Slab: 161904 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98444 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 
-- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.632 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.632 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 
09:41:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 
09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.633 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.633 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.633 09:41:38 -- setup/common.sh@33 -- # echo 512 00:03:49.633 09:41:38 -- setup/common.sh@33 -- # return 0 00:03:49.633 09:41:38 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:49.633 09:41:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.633 09:41:38 -- setup/hugepages.sh@27 -- # local node 00:03:49.633 09:41:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.633 09:41:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.633 09:41:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.633 09:41:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.633 09:41:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.633 09:41:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.633 09:41:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.633 09:41:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.633 09:41:38 -- setup/common.sh@18 -- # local node=0 00:03:49.633 09:41:38 -- setup/common.sh@19 -- # local var val 00:03:49.633 09:41:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.633 09:41:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.633 09:41:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.633 09:41:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.634 09:41:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.634 09:41:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.634 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.634 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.634 09:41:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 8963608 kB' 'MemUsed: 3273492 kB' 'SwapCached: 0 kB' 'Active: 466732 kB' 'Inactive: 1421976 kB' 'Active(anon): 127508 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 50744 kB' 'AnonPages: 118588 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63460 kB' 'Slab: 161892 kB' 'SReclaimable: 63460 kB' 'SUnreclaim: 98432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.634 09:41:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.634 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.634 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.634 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.634 09:41:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.634 09:41:38 -- setup/common.sh@32 -- # continue 00:03:49.634 09:41:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.634 09:41:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.634 09:41:38 -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.634 09:41:38 -- setup/common.sh@32 -- # continue
00:03:49.634 [... xtrace elided: the IFS=': ' / read -r loop compares and skips every remaining non-matching /proc/meminfo key (SwapCached ... HugePages_Free) ...]
00:03:49.635 09:41:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.635 09:41:38 -- setup/common.sh@33 -- # echo 0
00:03:49.635 09:41:38 -- setup/common.sh@33 -- # return 0
00:03:49.635 09:41:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:49.635 09:41:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:49.635 09:41:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:49.635 09:41:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:49.635 09:41:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:49.635 node0=512 expecting 512
00:03:49.635 09:41:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:49.635
00:03:49.635 real 0m0.627s
00:03:49.635 user 0m0.259s
00:03:49.635 sys 0m0.377s
00:03:49.635 09:41:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:49.635 09:41:38 -- common/autotest_common.sh@10 -- # set +x
00:03:49.635 ************************************
00:03:49.635 END TEST custom_alloc
00:03:49.635 ************************************
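The get_meminfo helper whose xtrace fills the block above is just a key lookup over /proc/meminfo. A minimal sketch of that scan, reconstructed from the trace (an approximation of setup/common.sh, which additionally handles per-node files as shown further down):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys until the requested one
            echo "$val"                        # numeric value only; the unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }
    # usage: get_meminfo HugePages_Surp   -> prints 0 for this run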
00:03:49.635 09:41:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:49.635 09:41:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:49.635 09:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:49.635 09:41:38 -- common/autotest_common.sh@10 -- # set +x
00:03:49.635 ************************************
00:03:49.635 START TEST no_shrink_alloc
00:03:49.635 ************************************
00:03:49.635 09:41:38 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:49.635 09:41:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:49.635 09:41:38 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:49.635 09:41:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:49.635 09:41:38 -- setup/hugepages.sh@51 -- # shift
00:03:49.635 09:41:38 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:49.635 09:41:38 -- setup/hugepages.sh@52 -- # local node_ids
00:03:49.635 09:41:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:49.635 09:41:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:49.635 09:41:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:49.635 09:41:38 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:49.635 09:41:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:49.635 09:41:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:49.635 09:41:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:49.635 09:41:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:49.635 09:41:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:49.635 09:41:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:49.635 09:41:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:49.635 09:41:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:49.635 09:41:38 -- setup/hugepages.sh@73 -- # return 0
00:03:49.635 09:41:38 -- setup/hugepages.sh@198 -- # setup output
00:03:49.635 09:41:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.635 09:41:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:50.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:50.210 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.210 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.210 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.210 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.210 09:41:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:50.210 09:41:39 -- setup/hugepages.sh@89 -- # local node
00:03:50.210 09:41:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:50.210 09:41:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:50.210 09:41:39 -- setup/hugepages.sh@92 -- # local surp
00:03:50.210 09:41:39 -- setup/hugepages.sh@93 -- # local resv
00:03:50.210 09:41:39 -- setup/hugepages.sh@94 -- # local anon
00:03:50.210 09:41:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:50.210 09:41:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:50.210 09:41:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:50.210 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:50.210 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.210 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.210 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
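get_test_nr_hugepages, traced above, turns a requested pool size into a page count. The arithmetic reduces to the sketch below (sizes in kB; the division is an approximation of the traced script, and the 2048 kB default matches the Hugepagesize value the snapshots report later):

    size=2097152             # requested pool size in kB, from the trace
    default_hugepages=2048   # kB per 2 MiB hugepage
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # 1024, matching hugepages.sh@57 above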
00:03:50.210 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.210 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.210 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.210 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.210 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.210 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.210 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913272 kB' 'MemAvailable: 9469264 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 465704 kB' 'Inactive: 1421976 kB' 'Active(anon): 126480 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117868 kB' 'Mapped: 50264 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161824 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98384 kB' 'KernelStack: 6492 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:50.211 [... xtrace elided: "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" repeats for each non-matching /proc/meminfo key ...]
00:03:50.212 09:41:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.212 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:50.212 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:50.212 09:41:39 -- setup/hugepages.sh@97 -- # anon=0
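The hugepages.sh@96 test visible above is a transparent-hugepage guard: the kernel reports the active THP mode in brackets ("always [madvise] never"), and AnonHugePages is only counted when that mode is not [never]. A sketch of the check, assuming the standard sysfs path and reusing the get_meminfo sketch from earlier (the surrounding logic approximates the traced script):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in the snapshot above
    else
        anon=0                              # THP disabled; nothing to account for
    fi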
00:03:50.212 09:41:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.212 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.212 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:50.212 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.212 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.212 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.212 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.212 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.212 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.212 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.212 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.212 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.212 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913272 kB' 'MemAvailable: 9469264 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 465360 kB' 'Inactive: 1421976 kB' 'Active(anon): 126136 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117216 kB' 'Mapped: 49896 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161820 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98380 kB' 'KernelStack: 6448 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:50.212 [... xtrace elided: "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" repeats for each non-matching /proc/meminfo key ...]
00:03:50.213 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.213 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:50.213 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:50.213 09:41:39 -- setup/hugepages.sh@99 -- # surp=0
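Every call above goes through the same mem_f/mapfile preamble. Its purpose is per-node support: /sys/devices/system/node/node<N>/meminfo prefixes each line with "Node <N> ", which the extglob expansion strips so one parser handles both files. A sketch, with node=0 as an illustrative value (the calls traced here run with node unset, so they fall back to the global /proc/meminfo):

    shopt -s extglob                 # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # strip 'Node 0 ' prefixes; no-op for /proc/meminfo
    printf '%s\n' "${mem[@]}"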
00:03:50.213 09:41:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.213 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.213 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:50.213 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.213 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.213 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.213 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.213 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.213 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.213 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.213 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.213 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.213 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913272 kB' 'MemAvailable: 9469264 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 465356 kB' 'Inactive: 1421976 kB' 'Active(anon): 126132 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117212 kB' 'Mapped: 49896 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161820 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98380 kB' 'KernelStack: 6448 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:50.214 [... xtrace elided: "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" repeats for each non-matching /proc/meminfo key ...]
00:03:50.215 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.215 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:50.215 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:50.215 09:41:39 -- setup/hugepages.sh@100 -- # resv=0
00:03:50.215 09:41:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.215 nr_hugepages=1024
00:03:50.215 09:41:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.215 resv_hugepages=0
00:03:50.215 09:41:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.215 surplus_hugepages=0
00:03:50.215 09:41:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.215 anon_hugepages=0
00:03:50.215 09:41:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.215 09:41:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
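verify_nr_hugepages then cross-checks the pool it configured against what the kernel reports: the total must equal the requested count plus any surplus and reserved pages. In isolation, with the values from this run and the get_meminfo sketch from earlier (an illustrative rearrangement, not the exact script flow):

    nr_hugepages=1024; surp=0; resv=0           # from the traces above
    total=$(get_meminfo HugePages_Total)        # 1024 per the snapshots
    (( total == nr_hugepages + surp + resv ))   # hugepages.sh@107
    (( total == nr_hugepages ))                 # hugepages.sh@109: no surplus/reserved in use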
00:03:50.215 09:41:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.215 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.215 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:50.215 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.215 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.215 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.215 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.215 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.215 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.215 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.215 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.215 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.215 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913272 kB' 'MemAvailable: 9469264 kB' 'Buffers: 2684 kB' 'Cached: 1769008 kB' 'SwapCached: 0 kB' 'Active: 465340 kB' 'Inactive: 1421976 kB' 'Active(anon): 126116 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117244 kB' 'Mapped: 49896 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161816 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98376 kB' 'KernelStack: 6432 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:50.216 [... xtrace elided: "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" repeats over the non-matching keys MemTotal through HardwareCorrupted ...]
00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.216 09:41:39 -- setup/common.sh@33 -- # echo 1024 00:03:50.216 09:41:39 -- setup/common.sh@33 -- # return 0 00:03:50.216 09:41:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.216 09:41:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.216 09:41:39 -- setup/hugepages.sh@27 -- # local node 00:03:50.216 09:41:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.216 09:41:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.216 09:41:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:50.216 09:41:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.216 09:41:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.216 09:41:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.216 09:41:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.216 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.216 09:41:39 -- setup/common.sh@18 -- # local node=0 
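
The trace above is get_meminfo scanning /proc/meminfo one field at a time: each line is split on IFS=': ' by read -r var val _, non-matching keys fall through to continue, and the value is echoed once the requested key matches. Below is a minimal standalone sketch of that technique; it is not the actual setup/common.sh source, and the function name, the sed-based prefix stripping, and the example calls are illustrative only.

    #!/usr/bin/env bash
    # Hypothetical re-creation of the lookup pattern traced above.
    # Assumes the layouts the trace shows: /proc/meminfo globally, and
    # /sys/devices/system/node/node<N>/meminfo per node, whose lines
    # carry a "Node <N> " prefix that has to be stripped before parsing.
    get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # IFS=': ' splits on colons and spaces, so "HugePages_Total:  1024"
      # yields var=HugePages_Total, val=1024; "_" swallows a trailing "kB".
      while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
          echo "$val"
          return 0
        fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
    }

    get_meminfo_sketch HugePages_Total   # prints 1024 on this runner
    get_meminfo_sketch HugePages_Surp 0  # per-node form; prints 0 here
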
00:03:50.216 09:41:39 -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.216 09:41:39 -- setup/hugepages.sh@27 -- # local node
00:03:50.216 09:41:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.216 09:41:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:50.216 09:41:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:50.216 09:41:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.216 09:41:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.216 09:41:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.216 09:41:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.216 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.216 09:41:39 -- setup/common.sh@18 -- # local node=0
00:03:50.216 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.216 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.216 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.216 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.216 09:41:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.216 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.216 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.216 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.216 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913272 kB' 'MemUsed: 4323828 kB' 'SwapCached: 0 kB' 'Active: 465356 kB' 'Inactive: 1421976 kB' 'Active(anon): 126132 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421976 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771692 kB' 'Mapped: 49896 kB' 'AnonPages: 117212 kB' 'Shmem: 10492 kB' 'KernelStack: 6416 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63440 kB' 'Slab: 161816 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:50.216 09:41:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.216 09:41:39 -- setup/common.sh@32 -- # continue
00:03:50.216 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.217 09:41:39 -- setup/common.sh@31 -- # read -r var val _
[... identical compare / continue / IFS=': ' / read cycles for each remaining node0 field from MemFree through HugePages_Free elided ...]
00:03:50.217 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.217 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:50.217 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:50.217 09:41:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.217 09:41:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.479 09:41:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.479 node0=1024 expecting 1024
00:03:50.479 09:41:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.479 09:41:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:50.479 09:41:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:50.479 09:41:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:50.479 09:41:39 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:50.479 09:41:39 -- setup/hugepages.sh@202 -- # setup output
00:03:50.479 09:41:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.479 09:41:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:50.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:50.741 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.741 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.741 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.741 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:50.741 INFO: Requested 512 hugepages but 1024 already allocated on node0
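
The run above shows why the INFO line appears: node0 already held the expected 1024 hugepages, and setup.sh is re-invoked with NRHUGE=512 but CLEAR_HUGE=no, so the existing, larger pool is left in place rather than shrunk. A loose sketch of the per-node expectation check the trace performs, reusing the hypothetical get_meminfo_sketch from above (variable names are illustrative, not the script's own):

    # CLEAR_HUGE=no means setup.sh must not tear down the current pool,
    # so a request for 512 pages is satisfied by the 1024 already there.
    NRHUGE=512 CLEAR_HUGE=no
    node0=$(get_meminfo_sketch HugePages_Total 0)  # per-node pool size
    echo "node0=$node0 expecting 1024"             # matches the trace output
    [[ $node0 == 1024 ]] || exit 1
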
00:03:50.741 09:41:39 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:50.741 09:41:39 -- setup/hugepages.sh@89 -- # local node
00:03:50.741 09:41:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:50.741 09:41:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:50.741 09:41:39 -- setup/hugepages.sh@92 -- # local surp
00:03:50.741 09:41:39 -- setup/hugepages.sh@93 -- # local resv
00:03:50.741 09:41:39 -- setup/hugepages.sh@94 -- # local anon
00:03:50.741 09:41:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:50.741 09:41:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:50.741 09:41:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:50.741 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:50.741 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.741 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.741 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.741 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.741 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.741 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.741 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.741 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.742 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.742 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913560 kB' 'MemAvailable: 9469556 kB' 'Buffers: 2684 kB' 'Cached: 1769012 kB' 'SwapCached: 0 kB' 'Active: 466316 kB' 'Inactive: 1421980 kB' 'Active(anon): 127092 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118176 kB' 'Mapped: 50024 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161900 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98460 kB' 'KernelStack: 6540 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:50.742 09:41:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.742 09:41:39 -- setup/common.sh@32 -- # continue
00:03:50.742 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.742 09:41:39 -- setup/common.sh@31 -- # read -r var val _
[... identical compare / continue / IFS=': ' / read cycles for each remaining field from MemFree through HardwareCorrupted elided ...]
00:03:50.743 09:41:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.743 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:50.743 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:50.743 09:41:39 -- setup/hugepages.sh@97 -- # anon=0
00:03:50.743 09:41:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.743 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.743 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:50.743 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:50.743 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.743 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.743 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.743 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.743 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.743 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.743 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.743 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.743 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913560 kB' 'MemAvailable: 9469556 kB' 'Buffers: 2684 kB' 'Cached: 1769012 kB' 'SwapCached: 0 kB' 'Active: 465552 kB' 'Inactive: 1421980 kB' 'Active(anon): 126328 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117384 kB' 'Mapped: 50076 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161892 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98452 kB' 'KernelStack: 6480 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:51.006 09:41:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.006 09:41:39 -- setup/common.sh@32 -- # continue
00:03:51.006 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:51.006 09:41:39 -- setup/common.sh@31 -- # read -r var val _
[... identical compare / continue / IFS=': ' / read cycles for each remaining field from MemFree through HugePages_Rsvd elided ...]
00:03:51.007 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.007 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:51.007 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:51.007 09:41:39 -- setup/hugepages.sh@99 -- # surp=0
00:03:51.007 09:41:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.007 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.007 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:51.007 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:51.007 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.007 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.007 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.007 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.007 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.007 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.007 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:51.007 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:51.007 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913560 kB' 'MemAvailable: 9469556 kB' 'Buffers: 2684 kB' 'Cached: 1769012 kB' 'SwapCached: 0 kB' 'Active: 465372 kB' 'Inactive: 1421980 kB' 'Active(anon): 126148 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117196 kB' 'Mapped: 50076 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161892 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98452 kB' 'KernelStack: 6448 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:03:51.007 09:41:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.007 09:41:39 -- setup/common.sh@32 -- # continue
00:03:51.007 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:51.007 09:41:39 -- setup/common.sh@31 -- # read -r var val _
[... identical compare / continue / IFS=': ' / read cycles for MemFree through Slab elided ...]
00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.008 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.008 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.009 09:41:39 -- setup/common.sh@32 -- # continue 00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.009 09:41:39 -- 
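The loop traced above is the get_meminfo helper from test/setup/common.sh walking the meminfo file one 'key: value' line at a time; xtrace prints the expanded right-hand side of == with every character escaped (hence \H\u\g\e\P\a\g\e\s\_\R\s\v\d), which keeps the comparison literal rather than a glob. A minimal sketch of the same idiom, not the verbatim helper (meminfo_get is an illustrative name):

  # meminfo_get KEY [FILE]: print the value column of one meminfo field.
  meminfo_get() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # each miss logs as [[ <key> == ... ]] / continue
          echo "$val"
          return 0
      done < "$file"
      return 1   # key not present in the file
  }

For the dump shown above, meminfo_get HugePages_Rsvd would print 0, matching the echo 0 / return 0 that follows.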
00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:51.009 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.009 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:51.009 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:51.009 nr_hugepages=1024
resv_hugepages=0
00:03:51.009 09:41:39 -- setup/hugepages.sh@100 -- # resv=0
00:03:51.009 09:41:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:51.009 09:41:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:03:51.009 09:41:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:03:51.009 09:41:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:51.009 09:41:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.009 09:41:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:51.009 09:41:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.009 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.009 09:41:39 -- setup/common.sh@18 -- # local node=
00:03:51.009 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:51.009 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.009 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.009 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.009 09:41:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.009 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.009 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.009 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:51.009 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:51.009 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913560 kB' 'MemAvailable: 9469556 kB' 'Buffers: 2684 kB' 'Cached: 1769012 kB' 'SwapCached: 0 kB' 'Active: 465540 kB' 'Inactive: 1421980 kB' 'Active(anon): 126316 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117400 kB' 'Mapped: 49896 kB' 'Shmem: 10492 kB' 'KReclaimable: 63440 kB' 'Slab: 161892 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98452 kB' 'KernelStack: 6464 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
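Right before this second get_meminfo call, hugepages.sh asserted (( 1024 == nr_hugepages + surp + resv )): the configured page count must equal the free pages plus surplus plus reserved. Plugging in the values echoed above (a worked check against the log, not extra test logic):

  nr_hugepages=1024   # from 'echo nr_hugepages=1024'
  surp=0              # surplus_hugepages
  resv=0              # resv_hugepages
  (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"

The dump is also self-consistent on size: 1024 pages at a Hugepagesize of 2048 kB gives 1024 * 2048 = 2097152 kB, exactly the Hugetlb line.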
00:03:51.009 09:41:39 -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue [repeated for every /proc/meminfo key from MemTotal through Unaccepted; none match]
00:03:51.010 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.010 09:41:39 -- setup/common.sh@33 -- # echo 1024
00:03:51.010 09:41:39 -- setup/common.sh@33 -- # return 0
00:03:51.010 09:41:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.010 09:41:39 -- setup/hugepages.sh@112 -- # get_nodes
00:03:51.010 09:41:39 -- setup/hugepages.sh@27 -- # local node
00:03:51.010 09:41:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.010 09:41:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:51.010 09:41:39 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:51.010 09:41:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.010 09:41:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.010 09:41:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.010 09:41:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:51.010 09:41:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
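This call passes node 0, so get_meminfo switches from /proc/meminfo to the per-node sysfs copy, and the mem=("${mem[@]#Node +([0-9]) }") step strips the 'Node 0 ' prefix that every line in that file carries (the +([0-9]) pattern needs extglob). The earlier, node-less call probed the odd-looking path /sys/devices/system/node/node/meminfo, which simply fails the -e test and leaves /proc/meminfo selected. A slightly tightened sketch of that source selection (illustrative function name, not the verbatim helper):

  # Pick the meminfo source for an optional NUMA node argument.
  meminfo_source() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      echo "$mem_f"
  }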
00:03:51.010 09:41:39 -- setup/common.sh@18 -- # local node=0
00:03:51.010 09:41:39 -- setup/common.sh@19 -- # local var val
00:03:51.010 09:41:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.010 09:41:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.010 09:41:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:51.010 09:41:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:51.010 09:41:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.010 09:41:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.010 09:41:39 -- setup/common.sh@31 -- # IFS=': '
00:03:51.010 09:41:39 -- setup/common.sh@31 -- # read -r var val _
00:03:51.011 09:41:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237100 kB' 'MemFree: 7913560 kB' 'MemUsed: 4323540 kB' 'SwapCached: 0 kB' 'Active: 465436 kB' 'Inactive: 1421980 kB' 'Active(anon): 126212 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771696 kB' 'Mapped: 49896 kB' 'AnonPages: 117336 kB' 'Shmem: 10492 kB' 'KernelStack: 6464 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63440 kB' 'Slab: 161888 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 98448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
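Note the per-node dump reports MemUsed where /proc/meminfo reports MemAvailable; MemUsed is derived rather than sampled, and the figures above check out:

  # From the node0 dump, in kB: MemUsed = MemTotal - MemFree
  echo $(( 12237100 - 7913560 ))   # 4323540, matching the 'MemUsed: 4323540 kB' line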
00:03:51.011 09:41:39 -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue [repeated for every node0 meminfo key from MemTotal through HugePages_Free; none match]
00:03:51.012 09:41:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.012 09:41:39 -- setup/common.sh@33 -- # echo 0
00:03:51.012 09:41:39 -- setup/common.sh@33 -- # return 0
node0=1024 expecting 1024
00:03:51.012 09:41:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.012 09:41:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.012 09:41:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.012 09:41:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:51.012 09:41:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:51.012 09:41:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:51.012
00:03:51.012 real 0m1.257s
00:03:51.012 user 0m0.502s
00:03:51.012 sys 0m0.767s
00:03:51.012 ************************************
00:03:51.012 END TEST no_shrink_alloc
00:03:51.012 ************************************
00:03:51.012 09:41:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:51.012 09:41:39 -- common/autotest_common.sh@10 -- # set +x
00:03:51.012 09:41:39 -- setup/hugepages.sh@217 -- # clear_hp
00:03:51.012 09:41:39 -- setup/hugepages.sh@37 -- # local node hp
00:03:51.012 09:41:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:51.012 09:41:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.012 09:41:39 -- setup/hugepages.sh@41 -- # echo 0
00:03:51.012 09:41:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.012 09:41:39 -- setup/hugepages.sh@41 -- # echo 0
00:03:51.012 09:41:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:51.012 09:41:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:51.012 ************************************
00:03:51.012 END TEST hugepages
00:03:51.012 ************************************
00:03:51.012 real 0m5.713s
00:03:51.012 user 0m2.191s
00:03:51.012 sys 0m3.239s
00:03:51.012 09:41:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:51.012 09:41:39 -- common/autotest_common.sh@10 -- # set +x
00:03:51.012 09:41:39 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:03:51.012 09:41:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:51.012 09:41:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
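Before the driver tests start below, the clear_hp teardown traced above returns every hugepage pool to zero so the next test group starts clean; the two @40/@41 iterations suggest two pool sizes on node0 (presumably the 2 MiB and 1 GiB pools). A minimal sketch of what those echo 0 writes target, assuming root and the standard sysfs layout:

  # Zero every hugepage pool on every NUMA node, as clear_hp does.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"   # one write per page-size directory
      done
  done
  export CLEAR_HUGE=yes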
09:41:39 -- common/autotest_common.sh@10 -- # set +x 00:03:51.012 ************************************ 00:03:51.012 START TEST driver 00:03:51.012 ************************************ 00:03:51.012 09:41:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:51.274 * Looking for test storage... 00:03:51.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.274 09:41:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.274 09:41:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.274 09:41:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.274 09:41:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.274 09:41:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.274 09:41:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.274 09:41:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.274 09:41:40 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.274 09:41:40 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.274 09:41:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.274 09:41:40 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.274 09:41:40 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.274 09:41:40 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.274 09:41:40 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.274 09:41:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.274 09:41:40 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.274 09:41:40 -- scripts/common.sh@344 -- # : 1 00:03:51.274 09:41:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.274 09:41:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.274 09:41:40 -- scripts/common.sh@364 -- # decimal 1 00:03:51.274 09:41:40 -- scripts/common.sh@352 -- # local d=1 00:03:51.274 09:41:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.274 09:41:40 -- scripts/common.sh@354 -- # echo 1 00:03:51.274 09:41:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.274 09:41:40 -- scripts/common.sh@365 -- # decimal 2 00:03:51.274 09:41:40 -- scripts/common.sh@352 -- # local d=2 00:03:51.274 09:41:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.274 09:41:40 -- scripts/common.sh@354 -- # echo 2 00:03:51.274 09:41:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.274 09:41:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.274 09:41:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.274 09:41:40 -- scripts/common.sh@367 -- # return 0 00:03:51.274 09:41:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.274 09:41:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.274 --rc genhtml_branch_coverage=1 00:03:51.274 --rc genhtml_function_coverage=1 00:03:51.274 --rc genhtml_legend=1 00:03:51.274 --rc geninfo_all_blocks=1 00:03:51.274 --rc geninfo_unexecuted_blocks=1 00:03:51.274 00:03:51.274 ' 00:03:51.274 09:41:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.274 --rc genhtml_branch_coverage=1 00:03:51.274 --rc genhtml_function_coverage=1 00:03:51.274 --rc genhtml_legend=1 00:03:51.274 --rc geninfo_all_blocks=1 00:03:51.274 --rc geninfo_unexecuted_blocks=1 00:03:51.274 00:03:51.274 ' 00:03:51.274 09:41:40 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:03:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.274 --rc genhtml_branch_coverage=1 00:03:51.274 --rc genhtml_function_coverage=1 00:03:51.274 --rc genhtml_legend=1 00:03:51.274 --rc geninfo_all_blocks=1 00:03:51.274 --rc geninfo_unexecuted_blocks=1 00:03:51.274 00:03:51.274 ' 00:03:51.274 09:41:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.274 --rc genhtml_branch_coverage=1 00:03:51.274 --rc genhtml_function_coverage=1 00:03:51.274 --rc genhtml_legend=1 00:03:51.274 --rc geninfo_all_blocks=1 00:03:51.274 --rc geninfo_unexecuted_blocks=1 00:03:51.274 00:03:51.274 ' 00:03:51.274 09:41:40 -- setup/driver.sh@68 -- # setup reset 00:03:51.274 09:41:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.274 09:41:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.860 09:41:46 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:57.860 09:41:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.860 09:41:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.860 09:41:46 -- common/autotest_common.sh@10 -- # set +x 00:03:57.860 ************************************ 00:03:57.860 START TEST guess_driver 00:03:57.860 ************************************ 00:03:57.860 09:41:46 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:57.860 09:41:46 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:57.860 09:41:46 -- setup/driver.sh@47 -- # local fail=0 00:03:57.860 09:41:46 -- setup/driver.sh@49 -- # pick_driver 00:03:57.860 09:41:46 -- setup/driver.sh@36 -- # vfio 00:03:57.860 09:41:46 -- setup/driver.sh@21 -- # local iommu_grups 00:03:57.860 09:41:46 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:57.860 09:41:46 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:57.860 09:41:46 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:57.860 09:41:46 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:57.860 09:41:46 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:57.860 09:41:46 -- setup/driver.sh@32 -- # return 1 00:03:57.860 09:41:46 -- setup/driver.sh@38 -- # uio 00:03:57.860 09:41:46 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:57.860 09:41:46 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:57.860 09:41:46 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:57.860 09:41:46 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:57.860 09:41:46 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:57.860 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:57.860 09:41:46 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:57.860 Looking for driver=uio_pci_generic 00:03:57.860 09:41:46 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:57.860 09:41:46 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:57.860 09:41:46 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:57.860 09:41:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.860 09:41:46 -- setup/driver.sh@45 -- # setup output config 00:03:57.860 09:41:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.860 09:41:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.122 
09:41:47 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:58.122 09:41:47 -- setup/driver.sh@58 -- # continue 00:03:58.122 09:41:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.382 09:41:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.382 09:41:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:58.382 09:41:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.382 09:41:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.382 09:41:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:58.382 09:41:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.382 09:41:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.382 09:41:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:58.382 09:41:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.382 09:41:47 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:58.382 09:41:47 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:58.382 09:41:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.382 09:41:47 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:58.382 09:41:47 -- setup/driver.sh@65 -- # setup reset 00:03:58.382 09:41:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.382 09:41:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.005 00:04:05.005 real 0m7.210s 00:04:05.005 user 0m0.739s 00:04:05.005 sys 0m1.352s 00:04:05.005 ************************************ 00:04:05.005 END TEST guess_driver 00:04:05.005 ************************************ 00:04:05.005 09:41:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.005 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 00:04:05.005 real 0m13.365s 00:04:05.005 user 0m1.114s 00:04:05.005 sys 0m2.197s 00:04:05.005 09:41:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.005 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 ************************************ 00:04:05.005 END TEST driver 00:04:05.005 ************************************ 00:04:05.005 09:41:53 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:05.005 09:41:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.005 09:41:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.005 09:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 ************************************ 00:04:05.005 START TEST devices 00:04:05.005 ************************************ 00:04:05.005 09:41:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:05.005 * Looking for test storage... 
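The guess_driver pass above settled on uio_pci_generic: the vfio branch failed because /sys/kernel/iommu_groups was empty ((( 0 > 0 ))) and unsafe no-IOMMU mode was not enabled ([[ '' == Y ]]), after which 'modprobe --show-depends uio_pci_generic' proved the module chain resolves to real .ko files. A simplified sketch of that decision, not the verbatim test/setup/driver.sh:

  shopt -s nullglob   # an empty iommu_groups dir must yield an empty array, as in the trace
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe
      unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
          echo uio_pci_generic
      else
          echo 'No valid driver found'
      fi
  }

The four [[ -> == \-\> ]] / [[ uio_pci_generic == uio_pci_generic ]] pairs just above are setup.sh's config output being re-read: presumably one '->' marker per matched NVMe controller, each bound to the guessed driver, so fail stays 0.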
00:04:05.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.005 09:41:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:05.005 09:41:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:05.005 09:41:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:05.005 09:41:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:05.005 09:41:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:05.005 09:41:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:05.005 09:41:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:05.005 09:41:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:05.005 09:41:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:05.005 09:41:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.005 09:41:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:05.005 09:41:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:05.005 09:41:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:05.005 09:41:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:05.005 09:41:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:05.005 09:41:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:05.005 09:41:53 -- scripts/common.sh@344 -- # : 1 00:04:05.005 09:41:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:05.005 09:41:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.005 09:41:53 -- scripts/common.sh@364 -- # decimal 1 00:04:05.005 09:41:53 -- scripts/common.sh@352 -- # local d=1 00:04:05.006 09:41:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.006 09:41:53 -- scripts/common.sh@354 -- # echo 1 00:04:05.006 09:41:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:05.006 09:41:53 -- scripts/common.sh@365 -- # decimal 2 00:04:05.006 09:41:53 -- scripts/common.sh@352 -- # local d=2 00:04:05.006 09:41:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.006 09:41:53 -- scripts/common.sh@354 -- # echo 2 00:04:05.006 09:41:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:05.006 09:41:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:05.006 09:41:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:05.006 09:41:53 -- scripts/common.sh@367 -- # return 0 00:04:05.006 09:41:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.006 09:41:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.006 --rc genhtml_branch_coverage=1 00:04:05.006 --rc genhtml_function_coverage=1 00:04:05.006 --rc genhtml_legend=1 00:04:05.006 --rc geninfo_all_blocks=1 00:04:05.006 --rc geninfo_unexecuted_blocks=1 00:04:05.006 00:04:05.006 ' 00:04:05.006 09:41:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.006 --rc genhtml_branch_coverage=1 00:04:05.006 --rc genhtml_function_coverage=1 00:04:05.006 --rc genhtml_legend=1 00:04:05.006 --rc geninfo_all_blocks=1 00:04:05.006 --rc geninfo_unexecuted_blocks=1 00:04:05.006 00:04:05.006 ' 00:04:05.006 09:41:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.006 --rc genhtml_branch_coverage=1 00:04:05.006 --rc genhtml_function_coverage=1 00:04:05.006 --rc genhtml_legend=1 00:04:05.006 --rc geninfo_all_blocks=1 00:04:05.006 --rc geninfo_unexecuted_blocks=1 00:04:05.006 00:04:05.006 ' 00:04:05.006 09:41:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.006 --rc genhtml_branch_coverage=1 00:04:05.006 --rc genhtml_function_coverage=1 00:04:05.006 --rc genhtml_legend=1 00:04:05.006 --rc geninfo_all_blocks=1 00:04:05.006 --rc geninfo_unexecuted_blocks=1 00:04:05.006 00:04:05.006 ' 00:04:05.006 09:41:53 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:05.006 09:41:53 -- setup/devices.sh@192 -- # setup reset 00:04:05.006 09:41:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.006 09:41:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.948 09:41:54 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:05.948 09:41:54 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:05.948 09:41:54 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:05.948 09:41:54 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:05.948 09:41:54 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:05.948 09:41:54 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:05.948 09:41:54 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:05.948 09:41:54 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:05.948 09:41:54 -- setup/devices.sh@196 -- # blocks=() 00:04:05.948 09:41:54 -- setup/devices.sh@196 -- # declare -a blocks 00:04:05.948 09:41:54 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:05.948 09:41:54 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:05.949 09:41:54 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:05.949 09:41:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.949 09:41:54 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:05.949 09:41:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:05.949 09:41:54 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:05.949 09:41:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:05.949 No valid GPT data, bailing 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # pt= 00:04:05.949 09:41:54 -- scripts/common.sh@394 -- # return 1 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:05.949 09:41:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:05.949 09:41:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:05.949 09:41:54 -- setup/common.sh@80 -- # echo 1073741824 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:05.949 09:41:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.949 09:41:54 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:05.949 09:41:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:05.949 09:41:54 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:05.949 09:41:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:05.949 No valid GPT data, bailing 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # pt= 00:04:05.949 09:41:54 -- scripts/common.sh@394 -- # return 1 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:05.949 09:41:54 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:05.949 09:41:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:05.949 09:41:54 -- setup/common.sh@80 -- # echo 4294967296 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.949 09:41:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.949 09:41:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:05.949 09:41:54 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.949 09:41:54 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:05.949 09:41:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:05.949 09:41:54 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:05.949 09:41:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:05.949 No valid GPT data, bailing 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # pt= 00:04:05.949 09:41:54 -- scripts/common.sh@394 -- # return 1 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:05.949 09:41:54 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:05.949 09:41:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:05.949 09:41:54 -- setup/common.sh@80 -- # echo 4294967296 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.949 09:41:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.949 09:41:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:05.949 09:41:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:05.949 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.949 09:41:54 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:05.949 09:41:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:05.949 09:41:54 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:05.949 09:41:54 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:05.949 09:41:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:05.949 No valid GPT data, bailing 00:04:05.949 09:41:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:06.210 09:41:54 -- scripts/common.sh@393 -- # pt= 00:04:06.210 09:41:54 -- scripts/common.sh@394 -- # return 1 00:04:06.210 09:41:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:06.210 09:41:54 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:06.210 09:41:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:06.210 09:41:54 -- setup/common.sh@80 -- # echo 4294967296 00:04:06.210 09:41:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:06.210 09:41:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.210 09:41:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:06.210 09:41:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:06.210 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:06.210 09:41:54 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:06.210 09:41:54 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:06.210 09:41:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:06.210 09:41:54 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:06.210 09:41:54 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:06.210 09:41:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:06.210 No valid GPT data, bailing 00:04:06.210 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:06.210 
09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:06.210 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:06.210 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:06.210 09:41:55 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:06.210 09:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:06.210 09:41:55 -- setup/common.sh@80 -- # echo 6343335936 00:04:06.210 09:41:55 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:06.210 09:41:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.210 09:41:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:06.210 09:41:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:06.210 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:06.210 09:41:55 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:06.210 09:41:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:06.210 09:41:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:06.210 09:41:55 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:06.210 09:41:55 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:06.210 09:41:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:06.210 No valid GPT data, bailing 00:04:06.210 09:41:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:06.210 09:41:55 -- scripts/common.sh@393 -- # pt= 00:04:06.210 09:41:55 -- scripts/common.sh@394 -- # return 1 00:04:06.210 09:41:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:06.210 09:41:55 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:06.210 09:41:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:06.210 09:41:55 -- setup/common.sh@80 -- # echo 5368709120 00:04:06.210 09:41:55 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:06.210 09:41:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.210 09:41:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:06.210 09:41:55 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:06.210 09:41:55 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:06.210 09:41:55 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:06.210 09:41:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.210 09:41:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.210 09:41:55 -- common/autotest_common.sh@10 -- # set +x 00:04:06.210 ************************************ 00:04:06.210 START TEST nvme_mount 00:04:06.210 ************************************ 00:04:06.210 09:41:55 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:06.210 09:41:55 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:06.210 09:41:55 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:06.210 09:41:55 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.210 09:41:55 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:06.210 09:41:55 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:06.210 09:41:55 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:06.210 09:41:55 -- setup/common.sh@40 -- # local part_no=1 00:04:06.210 09:41:55 -- setup/common.sh@41 -- # local size=1073741824 00:04:06.210 09:41:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:06.210 09:41:55 -- setup/common.sh@44 -- # parts=() 00:04:06.210 09:41:55 -- 
setup/common.sh@44 -- # local parts 00:04:06.210 09:41:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:06.210 09:41:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.210 09:41:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.210 09:41:55 -- setup/common.sh@46 -- # (( part++ )) 00:04:06.210 09:41:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.210 09:41:55 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:06.210 09:41:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:06.210 09:41:55 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:07.595 Creating new GPT entries in memory. 00:04:07.595 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.595 other utilities. 00:04:07.595 09:41:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.595 09:41:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.595 09:41:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:07.595 09:41:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.595 09:41:56 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:08.546 Creating new GPT entries in memory. 00:04:08.546 The operation has completed successfully. 00:04:08.546 09:41:57 -- setup/common.sh@57 -- # (( part++ )) 00:04:08.546 09:41:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.546 09:41:57 -- setup/common.sh@62 -- # wait 53724 00:04:08.546 09:41:57 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.546 09:41:57 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:08.546 09:41:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.546 09:41:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:08.546 09:41:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:08.546 09:41:57 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.546 09:41:57 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.546 09:41:57 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:08.546 09:41:57 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:08.546 09:41:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.546 09:41:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.546 09:41:57 -- setup/devices.sh@53 -- # local found=0 00:04:08.546 09:41:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.546 09:41:57 -- setup/devices.sh@56 -- # : 00:04:08.546 09:41:57 -- setup/devices.sh@59 -- # local pci status 00:04:08.546 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.546 09:41:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:08.546 09:41:57 -- setup/devices.sh@47 -- # setup output config 00:04:08.546 09:41:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.546 09:41:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.546 09:41:57 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.546 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.807 09:41:57 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.807 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.807 09:41:57 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.807 09:41:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:08.807 09:41:57 -- setup/devices.sh@63 -- # found=1 00:04:08.807 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.807 09:41:57 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:08.807 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.068 09:41:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:09.068 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.068 09:41:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:09.068 09:41:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.068 09:41:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.068 09:41:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:09.068 09:41:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.068 09:41:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.068 09:41:58 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.068 09:41:58 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:09.068 09:41:58 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.068 09:41:58 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.328 09:41:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:09.328 09:41:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:09.328 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.328 09:41:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:09.328 09:41:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:09.588 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:09.588 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:09.588 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:09.588 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:09.588 09:41:58 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:09.588 09:41:58 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:09.588 09:41:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.588 09:41:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:09.588 09:41:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:09.588 09:41:58 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.588 09:41:58 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.588 09:41:58 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:09.588 09:41:58 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:09.588 09:41:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.588 09:41:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.588 09:41:58 -- setup/devices.sh@53 -- # local found=0 00:04:09.588 09:41:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.588 09:41:58 -- setup/devices.sh@56 -- # : 00:04:09.588 09:41:58 -- setup/devices.sh@59 -- # local pci status 00:04:09.588 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.588 09:41:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:09.589 09:41:58 -- setup/devices.sh@47 -- # setup output config 00:04:09.589 09:41:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.589 09:41:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.849 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:09.849 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.849 09:41:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:09.849 09:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.110 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.110 09:41:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:10.110 09:41:59 -- setup/devices.sh@63 -- # found=1 00:04:10.110 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.110 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.110 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.371 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.371 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.371 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.371 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.371 09:41:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.371 09:41:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:10.371 09:41:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.371 09:41:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.371 09:41:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:10.371 09:41:59 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.371 09:41:59 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:10.371 09:41:59 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:10.371 09:41:59 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:10.371 09:41:59 -- setup/devices.sh@50 -- # local mount_point= 00:04:10.371 09:41:59 -- setup/devices.sh@51 -- # local test_file= 00:04:10.371 09:41:59 -- setup/devices.sh@53 -- # local found=0 
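Every verify pass in this log, including the one starting here, reduces to the same small check traced at setup/devices.sh@71-74: the directory must be a live mount point and the marker file written after mkfs must still exist. A minimal sketch of that pattern, assuming the same mount-point and marker paths as devices.sh (both are illustrative stand-ins, not the script's exact variables):

    # Sketch: verify a test mount the way devices.sh@71-74 does.
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount   # assumed path
    marker=$mnt/test_nvme                                    # assumed marker file

    if mountpoint -q "$mnt" && [[ -e $marker ]]; then
        rm "$marker"        # marker confirmed; remove it for the next pass
    else
        echo "verify failed for $mnt" >&2
        exit 1
    fi

The same shape repeats for every verify call below; only the mount point, the expected device string, and the marker file change.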
00:04:10.371 09:41:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.371 09:41:59 -- setup/devices.sh@59 -- # local pci status 00:04:10.371 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.371 09:41:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:10.371 09:41:59 -- setup/devices.sh@47 -- # setup output config 00:04:10.371 09:41:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.371 09:41:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.631 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.631 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.631 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.631 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.891 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.891 09:41:59 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:10.891 09:41:59 -- setup/devices.sh@63 -- # found=1 00:04:10.891 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.891 09:41:59 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:10.891 09:41:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.152 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.152 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.152 09:42:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:11.152 09:42:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.152 09:42:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.152 09:42:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:11.152 09:42:00 -- setup/devices.sh@68 -- # return 0 00:04:11.152 09:42:00 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:11.152 09:42:00 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.152 09:42:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:11.152 09:42:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:11.152 09:42:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:11.412 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.412 00:04:11.412 real 0m5.018s 00:04:11.412 user 0m0.964s 00:04:11.412 sys 0m1.286s 00:04:11.412 09:42:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.412 09:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:11.412 ************************************ 00:04:11.412 END TEST nvme_mount 00:04:11.412 ************************************ 00:04:11.412 09:42:00 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:11.412 09:42:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.412 09:42:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.412 09:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:11.412 ************************************ 00:04:11.412 START TEST dm_mount 00:04:11.412 ************************************ 00:04:11.412 09:42:00 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:11.412 09:42:00 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:11.412 09:42:00 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:11.412 09:42:00 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:11.412 09:42:00 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:11.412 09:42:00 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:11.412 09:42:00 -- setup/common.sh@40 -- # local part_no=2 00:04:11.412 09:42:00 -- setup/common.sh@41 -- # local size=1073741824 00:04:11.412 09:42:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:11.412 09:42:00 -- setup/common.sh@44 -- # parts=() 00:04:11.412 09:42:00 -- setup/common.sh@44 -- # local parts 00:04:11.412 09:42:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:11.412 09:42:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.412 09:42:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.412 09:42:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:11.412 09:42:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.412 09:42:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.412 09:42:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:11.412 09:42:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.412 09:42:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:11.412 09:42:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:11.412 09:42:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:12.353 Creating new GPT entries in memory. 00:04:12.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:12.353 other utilities. 00:04:12.353 09:42:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:12.353 09:42:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.353 09:42:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:12.353 09:42:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:12.353 09:42:01 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:13.738 Creating new GPT entries in memory. 00:04:13.739 The operation has completed successfully. 00:04:13.739 09:42:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:13.739 09:42:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.739 09:42:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.739 09:42:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.739 09:42:02 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:14.682 The operation has completed successfully. 
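Both nvme_mount and dm_mount reach this point through the same partition_drive helper: zap the GPT, carve out fixed 128 MiB partitions (the (( size /= 4096 )) step converts 1 GiB of bytes into a 512-byte-sector count of 262144), and wait for the kernel to publish the new partition nodes. A condensed sketch of that flow; udevadm settle is an assumed stand-in for the repo's sync_dev_uevents.sh helper, which waits for the specific partition uevents:

    # Sketch: partition a test disk the way partition_drive does above.
    disk=/dev/nvme1n1
    start=2048                      # first usable sector
    sectors=262144                  # 1073741824 / 4096 -> 128 MiB of 512 B sectors

    sgdisk "$disk" --zap-all        # wipe GPT and protective MBR
    for part in 1 2; do
        end=$(( start + sectors - 1 ))
        flock "$disk" sgdisk "$disk" --new=${part}:${start}:${end}
        start=$(( end + 1 ))
    done
    udevadm settle                  # assumption: stand-in for sync_dev_uevents.sh

sgdisk prints one "Creating new GPT entries in memory." / "The operation has completed successfully." pair per invocation, which is why those messages repeat in the trace above.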
00:04:14.682 09:42:03 -- setup/common.sh@57 -- # (( part++ )) 00:04:14.682 09:42:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.682 09:42:03 -- setup/common.sh@62 -- # wait 54352 00:04:14.682 09:42:03 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:14.682 09:42:03 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.682 09:42:03 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.682 09:42:03 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:14.682 09:42:03 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:14.682 09:42:03 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.682 09:42:03 -- setup/devices.sh@161 -- # break 00:04:14.682 09:42:03 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.682 09:42:03 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:14.682 09:42:03 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:14.682 09:42:03 -- setup/devices.sh@166 -- # dm=dm-0 00:04:14.682 09:42:03 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:14.682 09:42:03 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:14.682 09:42:03 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.682 09:42:03 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:14.682 09:42:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.682 09:42:03 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.682 09:42:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:14.682 09:42:03 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.682 09:42:03 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.682 09:42:03 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:14.682 09:42:03 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:14.682 09:42:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.682 09:42:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.682 09:42:03 -- setup/devices.sh@53 -- # local found=0 00:04:14.682 09:42:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.682 09:42:03 -- setup/devices.sh@56 -- # : 00:04:14.683 09:42:03 -- setup/devices.sh@59 -- # local pci status 00:04:14.683 09:42:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.683 09:42:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:14.683 09:42:03 -- setup/devices.sh@47 -- # setup output config 00:04:14.683 09:42:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.683 09:42:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.683 09:42:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.683 09:42:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.945 09:42:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:14.945 09:42:03 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.208 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.208 09:42:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:15.208 09:42:04 -- setup/devices.sh@63 -- # found=1 00:04:15.208 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.208 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.208 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.470 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.470 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.470 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.470 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.470 09:42:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.470 09:42:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:15.470 09:42:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:15.470 09:42:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:15.470 09:42:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:15.470 09:42:04 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:15.470 09:42:04 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:15.470 09:42:04 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:15.470 09:42:04 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:15.470 09:42:04 -- setup/devices.sh@50 -- # local mount_point= 00:04:15.470 09:42:04 -- setup/devices.sh@51 -- # local test_file= 00:04:15.470 09:42:04 -- setup/devices.sh@53 -- # local found=0 00:04:15.470 09:42:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:15.470 09:42:04 -- setup/devices.sh@59 -- # local pci status 00:04:15.470 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.470 09:42:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:15.470 09:42:04 -- setup/devices.sh@47 -- # setup output config 00:04:15.470 09:42:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.470 09:42:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.470 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.470 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.734 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.734 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.995 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.995 09:42:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:15.995 09:42:04 -- setup/devices.sh@63 -- # found=1 00:04:15.995 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.995 09:42:04 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.995 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.995 09:42:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:15.995 09:42:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.255 09:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:16.255 09:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.255 09:42:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.255 09:42:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:16.255 09:42:05 -- setup/devices.sh@68 -- # return 0 00:04:16.255 09:42:05 -- setup/devices.sh@187 -- # cleanup_dm 00:04:16.255 09:42:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.255 09:42:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:16.255 09:42:05 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:16.255 09:42:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:16.255 09:42:05 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:16.255 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.255 09:42:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:16.255 09:42:05 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:16.255 00:04:16.255 real 0m4.940s 00:04:16.255 user 0m0.641s 00:04:16.255 sys 0m0.901s 00:04:16.255 09:42:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.255 ************************************ 00:04:16.255 END TEST dm_mount 00:04:16.255 ************************************ 00:04:16.255 09:42:05 -- common/autotest_common.sh@10 -- # set +x 00:04:16.255 09:42:05 -- setup/devices.sh@1 -- # cleanup 00:04:16.255 09:42:05 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:16.255 09:42:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.255 09:42:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:16.255 09:42:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:16.255 09:42:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:16.255 09:42:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:16.514 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:16.514 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:16.514 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:16.514 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:16.775 09:42:05 -- setup/devices.sh@12 -- # cleanup_dm 00:04:16.775 09:42:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.775 09:42:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:16.775 09:42:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:16.775 09:42:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:16.775 09:42:05 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:16.775 09:42:05 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:16.775 00:04:16.775 real 0m12.144s 00:04:16.775 user 0m2.451s 00:04:16.775 sys 0m2.897s 00:04:16.775 09:42:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.775 ************************************ 00:04:16.775 END TEST devices 00:04:16.775 ************************************ 00:04:16.775 09:42:05 -- common/autotest_common.sh@10 -- # 
set +x
00:04:16.775 
00:04:16.775 real 0m42.952s
00:04:16.775 user 0m8.162s
00:04:16.775 sys 0m11.908s
00:04:16.775 09:42:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:16.775 ************************************
00:04:16.775 END TEST setup.sh
00:04:16.775 ************************************
00:04:16.775 09:42:05 -- common/autotest_common.sh@10 -- # set +x
00:04:16.775 09:42:05 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:16.775 Hugepages
00:04:16.775 node     hugesize     free /  total
00:04:16.775 node0   1048576kB        0 /      0
00:04:16.775 node0      2048kB     2048 /   2048
00:04:16.775 
00:04:16.775 Type     BDF             Vendor Device NUMA    Driver       Device   Block devices
00:04:17.034 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci   -        vda
00:04:17.034 NVMe     0000:00:06.0    1b36   0010   unknown nvme         nvme2    nvme2n1
00:04:17.034 NVMe     0000:00:07.0    1b36   0010   unknown nvme         nvme3    nvme3n1
00:04:17.034 NVMe     0000:00:08.0    1b36   0010   unknown nvme         nvme1    nvme1n1 nvme1n2 nvme1n3
00:04:17.292 NVMe     0000:00:09.0    1b36   0010   unknown nvme         nvme0    nvme0n1
00:04:17.292 09:42:06 -- spdk/autotest.sh@128 -- # uname -s
00:04:17.293 09:42:06 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:04:17.293 09:42:06 -- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:04:17.293 09:42:06 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:17.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:18.115 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.116 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.116 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.116 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:18.116 09:42:07 -- common/autotest_common.sh@1527 -- # sleep 1
00:04:19.492 09:42:08 -- common/autotest_common.sh@1528 -- # bdfs=()
00:04:19.492 09:42:08 -- common/autotest_common.sh@1528 -- # local bdfs
00:04:19.492 09:42:08 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:04:19.492 09:42:08 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:04:19.492 09:42:08 -- common/autotest_common.sh@1508 -- # bdfs=()
00:04:19.492 09:42:08 -- common/autotest_common.sh@1508 -- # local bdfs
00:04:19.492 09:42:08 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:19.492 09:42:08 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:19.492 09:42:08 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:04:19.492 09:42:08 -- common/autotest_common.sh@1510 -- # (( 4 == 0 ))
00:04:19.492 09:42:08 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:04:19.492 09:42:08 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:19.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:19.753 Waiting for block devices as requested
00:04:19.753 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:04:19.753 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:04:20.013 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:04:20.013 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:04:25.328 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:04:25.328 09:42:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:04:25.328 09:42:13 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:25.328 09:42:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:25.328 09:42:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:25.328 09:42:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1552 -- # continue 00:04:25.328 09:42:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:25.328 09:42:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:25.328 09:42:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:25.328 09:42:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1552 -- # continue 00:04:25.328 09:42:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:25.328 09:42:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:25.328 09:42:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:25.328 09:42:13 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:25.328 09:42:13 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:25.328 09:42:13 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1552 -- # continue 00:04:25.328 09:42:13 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:25.328 09:42:13 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:25.328 09:42:13 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:25.329 09:42:13 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:25.329 09:42:14 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:25.329 09:42:14 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:25.329 09:42:14 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:25.329 09:42:14 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:25.329 09:42:14 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:25.329 09:42:14 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:25.329 09:42:14 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:25.329 09:42:14 -- common/autotest_common.sh@1552 -- # continue 00:04:25.329 09:42:14 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:25.329 09:42:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.329 09:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:25.329 09:42:14 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:25.329 09:42:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.329 09:42:14 -- common/autotest_common.sh@10 -- # set +x 00:04:25.329 09:42:14 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.894 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.894 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.894 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.153 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.153 09:42:15 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:26.153 09:42:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.153 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.153 09:42:15 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:26.153 09:42:15 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:26.153 09:42:15 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.153 09:42:15 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:26.153 09:42:15 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:26.153 09:42:15 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:26.153 09:42:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:26.153 09:42:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:26.153 09:42:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.153 09:42:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.153 09:42:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:26.153 09:42:15 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:26.153 09:42:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:26.153 09:42:15 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:26.153 09:42:15 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.153 09:42:15 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:26.153 09:42:15 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
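The nvme_namespace_revert loop above and the opal_revert_cleanup pass around it lean on the same resolution trick: map a PCI address to its /dev/nvmeX controller node through sysfs, then interrogate the controller with nvme-cli. A compact sketch of one iteration, assuming nvme-cli is installed; the BDF is just an example value from this run:

    # Sketch: resolve a BDF to its controller and test for namespace management.
    bdf=0000:00:06.0                                    # example PCI address
    sys_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$sys_path")                  # e.g. /dev/nvme2

    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then                    # OACS bit 3: ns management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: all capacity allocated, nothing to revert"
    fi

On these QEMU controllers oacs comes back as 0x12a, so bit 3 is set and the unvmcap check runs; the value 0 is why each iteration in the trace ends with a bare continue.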
00:04:26.153 09:42:15 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:26.153 09:42:15 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.153 09:42:15 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:26.153 09:42:15 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:26.153 09:42:15 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.153 09:42:15 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:26.153 09:42:15 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:26.153 09:42:15 -- common/autotest_common.sh@1588 -- # return 0 00:04:26.153 09:42:15 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:26.153 09:42:15 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:26.153 09:42:15 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:26.153 09:42:15 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:26.153 09:42:15 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:26.153 09:42:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.153 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.153 09:42:15 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:26.153 09:42:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.153 09:42:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.153 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.153 ************************************ 00:04:26.153 START TEST env 00:04:26.153 ************************************ 00:04:26.153 09:42:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:26.411 * Looking for test storage... 00:04:26.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:26.411 09:42:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.411 09:42:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.411 09:42:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:26.411 09:42:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:26.411 09:42:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:26.411 09:42:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:26.411 09:42:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:26.411 09:42:15 -- scripts/common.sh@335 -- # IFS=.-: 00:04:26.411 09:42:15 -- scripts/common.sh@335 -- # read -ra ver1 00:04:26.411 09:42:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.411 09:42:15 -- scripts/common.sh@336 -- # read -ra ver2 00:04:26.411 09:42:15 -- scripts/common.sh@337 -- # local 'op=<' 00:04:26.411 09:42:15 -- scripts/common.sh@339 -- # ver1_l=2 00:04:26.411 09:42:15 -- scripts/common.sh@340 -- # ver2_l=1 00:04:26.411 09:42:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:26.411 09:42:15 -- scripts/common.sh@343 -- # case "$op" in 00:04:26.411 09:42:15 -- scripts/common.sh@344 -- # : 1 00:04:26.411 09:42:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:26.411 09:42:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.411 09:42:15 -- scripts/common.sh@364 -- # decimal 1 00:04:26.411 09:42:15 -- scripts/common.sh@352 -- # local d=1 00:04:26.411 09:42:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.411 09:42:15 -- scripts/common.sh@354 -- # echo 1 00:04:26.411 09:42:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:26.411 09:42:15 -- scripts/common.sh@365 -- # decimal 2 00:04:26.411 09:42:15 -- scripts/common.sh@352 -- # local d=2 00:04:26.411 09:42:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.411 09:42:15 -- scripts/common.sh@354 -- # echo 2 00:04:26.411 09:42:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:26.411 09:42:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:26.411 09:42:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:26.411 09:42:15 -- scripts/common.sh@367 -- # return 0 00:04:26.411 09:42:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.411 09:42:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:26.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.411 --rc genhtml_branch_coverage=1 00:04:26.411 --rc genhtml_function_coverage=1 00:04:26.411 --rc genhtml_legend=1 00:04:26.411 --rc geninfo_all_blocks=1 00:04:26.411 --rc geninfo_unexecuted_blocks=1 00:04:26.411 00:04:26.411 ' 00:04:26.411 09:42:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:26.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.411 --rc genhtml_branch_coverage=1 00:04:26.411 --rc genhtml_function_coverage=1 00:04:26.411 --rc genhtml_legend=1 00:04:26.411 --rc geninfo_all_blocks=1 00:04:26.411 --rc geninfo_unexecuted_blocks=1 00:04:26.411 00:04:26.411 ' 00:04:26.411 09:42:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:26.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.411 --rc genhtml_branch_coverage=1 00:04:26.411 --rc genhtml_function_coverage=1 00:04:26.411 --rc genhtml_legend=1 00:04:26.411 --rc geninfo_all_blocks=1 00:04:26.411 --rc geninfo_unexecuted_blocks=1 00:04:26.411 00:04:26.411 ' 00:04:26.411 09:42:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:26.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.411 --rc genhtml_branch_coverage=1 00:04:26.411 --rc genhtml_function_coverage=1 00:04:26.411 --rc genhtml_legend=1 00:04:26.411 --rc geninfo_all_blocks=1 00:04:26.411 --rc geninfo_unexecuted_blocks=1 00:04:26.411 00:04:26.411 ' 00:04:26.411 09:42:15 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:26.411 09:42:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.411 09:42:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.411 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.411 ************************************ 00:04:26.411 START TEST env_memory 00:04:26.411 ************************************ 00:04:26.411 09:42:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:26.411 00:04:26.411 00:04:26.411 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.411 http://cunit.sourceforge.net/ 00:04:26.411 00:04:26.411 00:04:26.411 Suite: memory 00:04:26.411 Test: alloc and free memory map ...[2024-12-15 09:42:15.318444] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:26.411 passed 00:04:26.411 Test: mem 
map translation ...[2024-12-15 09:42:15.357216] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:26.411 [2024-12-15 09:42:15.357341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:26.411 [2024-12-15 09:42:15.357452] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:26.411 [2024-12-15 09:42:15.357569] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:26.411 passed 00:04:26.411 Test: mem map registration ...[2024-12-15 09:42:15.425795] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:26.411 [2024-12-15 09:42:15.425897] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:26.670 passed 00:04:26.670 Test: mem map adjacent registrations ...passed 00:04:26.670 00:04:26.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.670 suites 1 1 n/a 0 0 00:04:26.670 tests 4 4 4 0 0 00:04:26.670 asserts 152 152 152 0 n/a 00:04:26.670 00:04:26.670 Elapsed time = 0.233 seconds 00:04:26.670 00:04:26.670 real 0m0.270s 00:04:26.670 user 0m0.244s 00:04:26.670 sys 0m0.016s 00:04:26.670 09:42:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.670 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.670 ************************************ 00:04:26.670 END TEST env_memory 00:04:26.670 ************************************ 00:04:26.670 09:42:15 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.670 09:42:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.670 09:42:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.670 09:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.670 ************************************ 00:04:26.670 START TEST env_vtophys 00:04:26.670 ************************************ 00:04:26.670 09:42:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.670 EAL: lib.eal log level changed from notice to debug 00:04:26.670 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 1 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 2 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 3 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 4 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 5 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 6 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 7 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 8 as core 0 on socket 0 00:04:26.670 EAL: Detected lcore 9 as core 0 on socket 0 00:04:26.670 EAL: Maximum logical cores by configuration: 128 00:04:26.670 EAL: Detected CPU lcores: 10 00:04:26.670 EAL: Detected NUMA nodes: 1 00:04:26.670 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:26.670 EAL: Detected shared linkage of DPDK 00:04:26.670 EAL: No shared files mode enabled, IPC will be disabled 00:04:26.670 EAL: Selected IOVA mode 'PA' 00:04:26.670 EAL: Probing VFIO support... 
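A minimal C sketch of the mem-map translation API that the memory_ut run above exercises; it assumes spdk/env.h is available, and the map and translation values below are made up for illustration. Both vaddr and len must be 2 MiB aligned, which is exactly the validation the "vaddr=2097152 len=1234" and "vaddr=1234 len=2097152" errors above show firing.

#include "spdk/env.h"

/* Sketch only: map contents and the translation value are hypothetical. */
static void mem_map_sketch(void)
{
    struct spdk_mem_map *map;
    uint64_t len, translation;

    /* 0 is handed back as the translation for any region never set. */
    map = spdk_mem_map_alloc(0, NULL, NULL);
    if (map == NULL) {
        return;
    }

    /* vaddr and len must both be 2 MiB aligned; unaligned values fail
     * with the "invalid spdk_mem_map_set_translation parameters"
     * errors seen above. */
    spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x1000000);

    len = 0x200000;
    translation = spdk_mem_map_translate(map, 0x200000, &len);
    (void)translation;

    spdk_mem_map_free(&map);
}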
00:04:26.670 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.670 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:26.670 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.670 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.670 EAL: Setting up physically contiguous memory... 00:04:26.670 EAL: Setting maximum number of open files to 524288 00:04:26.670 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.670 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.670 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.670 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.670 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.670 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.670 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.670 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.670 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.670 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.670 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.670 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.670 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.670 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.670 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.670 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.670 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.670 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.670 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.670 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.670 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.670 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.670 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.670 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.670 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.670 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.670 EAL: Hugepages will be freed exactly as allocated. 00:04:26.670 EAL: No shared files mode enabled, IPC is disabled 00:04:26.670 EAL: No shared files mode enabled, IPC is disabled 00:04:26.927 EAL: TSC frequency is ~2600000 KHz 00:04:26.927 EAL: Main lcore 0 is ready (tid=7ff60620da40;cpuset=[0]) 00:04:26.927 EAL: Trying to obtain current memory policy. 00:04:26.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.927 EAL: Restoring previous memory policy: 0 00:04:26.927 EAL: request: mp_malloc_sync 00:04:26.927 EAL: No shared files mode enabled, IPC is disabled 00:04:26.927 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.927 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.927 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.927 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.927 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:26.927 00:04:26.927 00:04:26.927 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.927 http://cunit.sourceforge.net/ 00:04:26.927 00:04:26.927 00:04:26.927 Suite: components_suite 00:04:27.184 Test: vtophys_malloc_test ...passed 00:04:27.184 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:27.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.184 EAL: Restoring previous memory policy: 4 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was expanded by 4MB 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was shrunk by 4MB 00:04:27.184 EAL: Trying to obtain current memory policy. 00:04:27.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.184 EAL: Restoring previous memory policy: 4 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was expanded by 6MB 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was shrunk by 6MB 00:04:27.184 EAL: Trying to obtain current memory policy. 00:04:27.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.184 EAL: Restoring previous memory policy: 4 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was expanded by 10MB 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was shrunk by 10MB 00:04:27.184 EAL: Trying to obtain current memory policy. 00:04:27.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.184 EAL: Restoring previous memory policy: 4 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was expanded by 18MB 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.184 EAL: request: mp_malloc_sync 00:04:27.184 EAL: No shared files mode enabled, IPC is disabled 00:04:27.184 EAL: Heap on socket 0 was shrunk by 18MB 00:04:27.184 EAL: Trying to obtain current memory policy. 00:04:27.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.184 EAL: Restoring previous memory policy: 4 00:04:27.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.185 EAL: request: mp_malloc_sync 00:04:27.185 EAL: No shared files mode enabled, IPC is disabled 00:04:27.185 EAL: Heap on socket 0 was expanded by 34MB 00:04:27.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.185 EAL: request: mp_malloc_sync 00:04:27.185 EAL: No shared files mode enabled, IPC is disabled 00:04:27.185 EAL: Heap on socket 0 was shrunk by 34MB 00:04:27.185 EAL: Trying to obtain current memory policy. 
00:04:27.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.443 EAL: Restoring previous memory policy: 4 00:04:27.443 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.443 EAL: request: mp_malloc_sync 00:04:27.443 EAL: No shared files mode enabled, IPC is disabled 00:04:27.443 EAL: Heap on socket 0 was expanded by 66MB 00:04:27.443 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.443 EAL: request: mp_malloc_sync 00:04:27.443 EAL: No shared files mode enabled, IPC is disabled 00:04:27.443 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.443 EAL: Trying to obtain current memory policy. 00:04:27.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.443 EAL: Restoring previous memory policy: 4 00:04:27.443 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.443 EAL: request: mp_malloc_sync 00:04:27.443 EAL: No shared files mode enabled, IPC is disabled 00:04:27.443 EAL: Heap on socket 0 was expanded by 130MB 00:04:27.701 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.701 EAL: request: mp_malloc_sync 00:04:27.701 EAL: No shared files mode enabled, IPC is disabled 00:04:27.701 EAL: Heap on socket 0 was shrunk by 130MB 00:04:27.701 EAL: Trying to obtain current memory policy. 00:04:27.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.701 EAL: Restoring previous memory policy: 4 00:04:27.701 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.701 EAL: request: mp_malloc_sync 00:04:27.701 EAL: No shared files mode enabled, IPC is disabled 00:04:27.701 EAL: Heap on socket 0 was expanded by 258MB 00:04:27.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.217 EAL: request: mp_malloc_sync 00:04:28.217 EAL: No shared files mode enabled, IPC is disabled 00:04:28.217 EAL: Heap on socket 0 was shrunk by 258MB 00:04:28.475 EAL: Trying to obtain current memory policy. 00:04:28.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.475 EAL: Restoring previous memory policy: 4 00:04:28.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.475 EAL: request: mp_malloc_sync 00:04:28.475 EAL: No shared files mode enabled, IPC is disabled 00:04:28.475 EAL: Heap on socket 0 was expanded by 514MB 00:04:29.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.042 EAL: request: mp_malloc_sync 00:04:29.042 EAL: No shared files mode enabled, IPC is disabled 00:04:29.042 EAL: Heap on socket 0 was shrunk by 514MB 00:04:29.609 EAL: Trying to obtain current memory policy. 
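The expand/shrink pairs above come from allocate/free round-trips at growing sizes, with EAL pulling in and handing back hugepages on demand. A hedged C sketch of that pattern, assuming spdk/env.h and an initialized env; the size and alignment here are illustrative:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

static void vtophys_sketch(void)
{
    uint64_t size = 4 * 1024 * 1024;
    uint64_t paddr;
    void *buf;

    /* An allocation that outgrows the current heap triggers the
     * "Heap on socket 0 was expanded by ..." path above. */
    buf = spdk_dma_malloc(size, 0x200000, NULL);
    if (buf == NULL) {
        return;
    }

    paddr = spdk_vtophys(buf, &size);
    if (paddr != SPDK_VTOPHYS_ERROR) {
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
    }

    /* Freeing lets EAL release hugepages ("... was shrunk by ..."). */
    spdk_dma_free(buf);
}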
00:04:29.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.609 EAL: Restoring previous memory policy: 4 00:04:29.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.609 EAL: request: mp_malloc_sync 00:04:29.609 EAL: No shared files mode enabled, IPC is disabled 00:04:29.609 EAL: Heap on socket 0 was expanded by 1026MB 00:04:30.984 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.984 EAL: request: mp_malloc_sync 00:04:30.984 EAL: No shared files mode enabled, IPC is disabled 00:04:30.984 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.934 passed 00:04:31.934 00:04:31.934 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.934 suites 1 1 n/a 0 0 00:04:31.934 tests 2 2 2 0 0 00:04:31.934 asserts 5390 5390 5390 0 n/a 00:04:31.934 00:04:31.934 Elapsed time = 4.828 seconds 00:04:31.934 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.934 EAL: request: mp_malloc_sync 00:04:31.934 EAL: No shared files mode enabled, IPC is disabled 00:04:31.934 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.934 EAL: No shared files mode enabled, IPC is disabled 00:04:31.934 EAL: No shared files mode enabled, IPC is disabled 00:04:31.934 EAL: No shared files mode enabled, IPC is disabled 00:04:31.934 00:04:31.934 real 0m5.079s 00:04:31.934 user 0m4.291s 00:04:31.934 sys 0m0.639s 00:04:31.934 09:42:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.934 09:42:20 -- common/autotest_common.sh@10 -- # set +x 00:04:31.934 ************************************ 00:04:31.934 END TEST env_vtophys 00:04:31.934 ************************************ 00:04:31.934 09:42:20 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.934 09:42:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.934 09:42:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.934 09:42:20 -- common/autotest_common.sh@10 -- # set +x 00:04:31.934 ************************************ 00:04:31.934 START TEST env_pci 00:04:31.935 ************************************ 00:04:31.935 09:42:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.935 00:04:31.935 00:04:31.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.935 http://cunit.sourceforge.net/ 00:04:31.935 00:04:31.935 00:04:31.935 Suite: pci 00:04:31.935 Test: pci_hook ...[2024-12-15 09:42:20.719974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56054 has claimed it 00:04:31.935 passed 00:04:31.935 00:04:31.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.935 suites 1 1 n/a 0 0 00:04:31.935 tests 1 1 1 0 0 00:04:31.935 asserts 25 25 25 0 n/a 00:04:31.935 00:04:31.935 Elapsed time = 0.005 seconds 00:04:31.935 EAL: Cannot find device (10000:00:01.0) 00:04:31.935 EAL: Failed to attach device on primary process 00:04:31.935 00:04:31.935 real 0m0.060s 00:04:31.935 user 0m0.029s 00:04:31.935 sys 0m0.031s 00:04:31.935 ************************************ 00:04:31.935 END TEST env_pci 00:04:31.935 ************************************ 00:04:31.935 09:42:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.935 09:42:20 -- common/autotest_common.sh@10 -- # set +x 00:04:31.935 09:42:20 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.935 09:42:20 -- env/env.sh@15 -- # uname 00:04:31.935 09:42:20 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.935 09:42:20 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:31.935 09:42:20 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.935 09:42:20 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:31.935 09:42:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.935 09:42:20 -- common/autotest_common.sh@10 -- # set +x 00:04:31.935 ************************************ 00:04:31.935 START TEST env_dpdk_post_init 00:04:31.935 ************************************ 00:04:31.935 09:42:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.935 EAL: Detected CPU lcores: 10 00:04:31.935 EAL: Detected NUMA nodes: 1 00:04:31.935 EAL: Detected shared linkage of DPDK 00:04:31.935 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.935 EAL: Selected IOVA mode 'PA' 00:04:32.199 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.199 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:32.199 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:32.199 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:32.199 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:32.199 Starting DPDK initialization... 00:04:32.199 Starting SPDK post initialization... 00:04:32.199 SPDK NVMe probe 00:04:32.199 Attaching to 0000:00:06.0 00:04:32.199 Attaching to 0000:00:07.0 00:04:32.199 Attaching to 0000:00:08.0 00:04:32.199 Attaching to 0000:00:09.0 00:04:32.199 Attached to 0000:00:06.0 00:04:32.199 Attached to 0000:00:07.0 00:04:32.199 Attached to 0000:00:09.0 00:04:32.199 Attached to 0000:00:08.0 00:04:32.199 Cleaning up... 00:04:32.199 00:04:32.199 real 0m0.217s 00:04:32.199 user 0m0.054s 00:04:32.199 sys 0m0.066s 00:04:32.199 ************************************ 00:04:32.199 END TEST env_dpdk_post_init 00:04:32.199 ************************************ 00:04:32.199 09:42:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:32.199 09:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.199 09:42:21 -- env/env.sh@26 -- # uname 00:04:32.199 09:42:21 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.199 09:42:21 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.199 09:42:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:32.199 09:42:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:32.199 09:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.199 ************************************ 00:04:32.199 START TEST env_mem_callbacks 00:04:32.199 ************************************ 00:04:32.199 09:42:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.199 EAL: Detected CPU lcores: 10 00:04:32.199 EAL: Detected NUMA nodes: 1 00:04:32.199 EAL: Detected shared linkage of DPDK 00:04:32.199 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.199 EAL: Selected IOVA mode 'PA' 00:04:32.199 00:04:32.199 00:04:32.199 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.199 http://cunit.sourceforge.net/ 00:04:32.199 00:04:32.199 00:04:32.199 Suite: memory 00:04:32.199 Test: test ... 
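The register/unregister lines that follow are emitted by the test's notify callback each time a memory region is registered or unregistered. A sketch of that hook's shape, assuming spdk/env.h; the names are hypothetical:

#include <stdio.h>
#include "spdk/env.h"

/* Called once per region as memory is registered or unregistered. */
static int
notify_sketch(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action,
              void *vaddr, size_t len)
{
    printf("%s %p %zu\n",
           action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
           vaddr, len);
    return 0;
}

static const struct spdk_mem_map_ops notify_ops = {
    .notify_cb = notify_sketch,
    .are_contiguous = NULL,
};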
00:04:32.199 register 0x200000200000 2097152 00:04:32.199 malloc 3145728 00:04:32.199 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.199 register 0x200000400000 4194304 00:04:32.457 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.457 malloc 64 00:04:32.457 buf 0x2000004ffec0 len 64 PASSED 00:04:32.457 malloc 4194304 00:04:32.457 register 0x200000800000 6291456 00:04:32.457 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.457 free 0x2000004fffc0 3145728 00:04:32.457 free 0x2000004ffec0 64 00:04:32.457 unregister 0x200000400000 4194304 PASSED 00:04:32.457 free 0x2000009fffc0 4194304 00:04:32.457 unregister 0x200000800000 6291456 PASSED 00:04:32.457 malloc 8388608 00:04:32.457 register 0x200000400000 10485760 00:04:32.457 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.457 free 0x2000005fffc0 8388608 00:04:32.457 unregister 0x200000400000 10485760 PASSED 00:04:32.457 passed 00:04:32.457 00:04:32.457 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.457 suites 1 1 n/a 0 0 00:04:32.457 tests 1 1 1 0 0 00:04:32.457 asserts 15 15 15 0 n/a 00:04:32.457 00:04:32.457 Elapsed time = 0.040 seconds 00:04:32.457 00:04:32.457 real 0m0.209s 00:04:32.457 user 0m0.056s 00:04:32.457 sys 0m0.050s 00:04:32.457 09:42:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:32.457 09:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 ************************************ 00:04:32.457 END TEST env_mem_callbacks 00:04:32.457 ************************************ 00:04:32.457 00:04:32.457 real 0m6.173s 00:04:32.457 user 0m4.827s 00:04:32.457 sys 0m0.989s 00:04:32.457 09:42:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:32.457 09:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 ************************************ 00:04:32.457 END TEST env 00:04:32.457 ************************************ 00:04:32.457 09:42:21 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.457 09:42:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:32.457 09:42:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:32.457 09:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 ************************************ 00:04:32.457 START TEST rpc 00:04:32.457 ************************************ 00:04:32.457 09:42:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.457 * Looking for test storage... 
00:04:32.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.457 09:42:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:32.457 09:42:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:32.457 09:42:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:32.457 09:42:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:32.457 09:42:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:32.457 09:42:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:32.458 09:42:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:32.458 09:42:21 -- scripts/common.sh@335 -- # IFS=.-: 00:04:32.458 09:42:21 -- scripts/common.sh@335 -- # read -ra ver1 00:04:32.458 09:42:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.458 09:42:21 -- scripts/common.sh@336 -- # read -ra ver2 00:04:32.458 09:42:21 -- scripts/common.sh@337 -- # local 'op=<' 00:04:32.458 09:42:21 -- scripts/common.sh@339 -- # ver1_l=2 00:04:32.458 09:42:21 -- scripts/common.sh@340 -- # ver2_l=1 00:04:32.458 09:42:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:32.458 09:42:21 -- scripts/common.sh@343 -- # case "$op" in 00:04:32.458 09:42:21 -- scripts/common.sh@344 -- # : 1 00:04:32.458 09:42:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:32.458 09:42:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.458 09:42:21 -- scripts/common.sh@364 -- # decimal 1 00:04:32.458 09:42:21 -- scripts/common.sh@352 -- # local d=1 00:04:32.458 09:42:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.458 09:42:21 -- scripts/common.sh@354 -- # echo 1 00:04:32.458 09:42:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:32.458 09:42:21 -- scripts/common.sh@365 -- # decimal 2 00:04:32.458 09:42:21 -- scripts/common.sh@352 -- # local d=2 00:04:32.458 09:42:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.458 09:42:21 -- scripts/common.sh@354 -- # echo 2 00:04:32.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
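The "Waiting for process to start up and listen ..." line above is waitforlisten polling until the target's JSON-RPC socket accepts connections. A hedged C sketch of the client side of that handshake, assuming spdk/jsonrpc.h:

#include <sys/socket.h>
#include "spdk/jsonrpc.h"

static void rpc_connect_sketch(void)
{
    struct spdk_jsonrpc_client *client;

    /* Same UNIX socket the spdk_tgt process listens on. */
    client = spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
    if (client != NULL) {
        spdk_jsonrpc_client_close(client);
    }
}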
00:04:32.458 09:42:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:32.458 09:42:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:32.458 09:42:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:32.458 09:42:21 -- scripts/common.sh@367 -- # return 0 00:04:32.458 09:42:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.458 09:42:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:32.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.458 --rc genhtml_branch_coverage=1 00:04:32.458 --rc genhtml_function_coverage=1 00:04:32.458 --rc genhtml_legend=1 00:04:32.458 --rc geninfo_all_blocks=1 00:04:32.458 --rc geninfo_unexecuted_blocks=1 00:04:32.458 00:04:32.458 ' 00:04:32.458 09:42:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:32.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.458 --rc genhtml_branch_coverage=1 00:04:32.458 --rc genhtml_function_coverage=1 00:04:32.458 --rc genhtml_legend=1 00:04:32.458 --rc geninfo_all_blocks=1 00:04:32.458 --rc geninfo_unexecuted_blocks=1 00:04:32.458 00:04:32.458 ' 00:04:32.458 09:42:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:32.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.458 --rc genhtml_branch_coverage=1 00:04:32.458 --rc genhtml_function_coverage=1 00:04:32.458 --rc genhtml_legend=1 00:04:32.458 --rc geninfo_all_blocks=1 00:04:32.458 --rc geninfo_unexecuted_blocks=1 00:04:32.458 00:04:32.458 ' 00:04:32.458 09:42:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:32.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.458 --rc genhtml_branch_coverage=1 00:04:32.458 --rc genhtml_function_coverage=1 00:04:32.458 --rc genhtml_legend=1 00:04:32.458 --rc geninfo_all_blocks=1 00:04:32.458 --rc geninfo_unexecuted_blocks=1 00:04:32.458 00:04:32.458 ' 00:04:32.458 09:42:21 -- rpc/rpc.sh@65 -- # spdk_pid=56180 00:04:32.458 09:42:21 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.458 09:42:21 -- rpc/rpc.sh@67 -- # waitforlisten 56180 00:04:32.458 09:42:21 -- common/autotest_common.sh@829 -- # '[' -z 56180 ']' 00:04:32.458 09:42:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.458 09:42:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.458 09:42:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.458 09:42:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.458 09:42:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.458 09:42:21 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.732 [2024-12-15 09:42:21.551468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:32.732 [2024-12-15 09:42:21.551578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56180 ] 00:04:32.732 [2024-12-15 09:42:21.700491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.990 [2024-12-15 09:42:21.846359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:32.990 [2024-12-15 09:42:21.846621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.990 [2024-12-15 09:42:21.846638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56180' to capture a snapshot of events at runtime. 00:04:32.990 [2024-12-15 09:42:21.846646] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56180 for offline analysis/debug. 00:04:32.990 [2024-12-15 09:42:21.846673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.556 09:42:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.556 09:42:22 -- common/autotest_common.sh@862 -- # return 0 00:04:33.556 09:42:22 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.556 09:42:22 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.556 09:42:22 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.556 09:42:22 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.556 09:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.556 09:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 ************************************ 00:04:33.556 START TEST rpc_integrity 00:04:33.556 ************************************ 00:04:33.556 09:42:22 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:33.556 09:42:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.556 09:42:22 -- rpc/rpc.sh@13 -- # jq length 00:04:33.556 09:42:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.556 09:42:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.556 09:42:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.556 { 00:04:33.556 "name": "Malloc0", 00:04:33.556 "aliases": [ 00:04:33.556 
"c9ce9703-a7b5-4fef-b21a-11d792712b9f" 00:04:33.556 ], 00:04:33.556 "product_name": "Malloc disk", 00:04:33.556 "block_size": 512, 00:04:33.556 "num_blocks": 16384, 00:04:33.556 "uuid": "c9ce9703-a7b5-4fef-b21a-11d792712b9f", 00:04:33.556 "assigned_rate_limits": { 00:04:33.556 "rw_ios_per_sec": 0, 00:04:33.556 "rw_mbytes_per_sec": 0, 00:04:33.556 "r_mbytes_per_sec": 0, 00:04:33.556 "w_mbytes_per_sec": 0 00:04:33.556 }, 00:04:33.556 "claimed": false, 00:04:33.556 "zoned": false, 00:04:33.556 "supported_io_types": { 00:04:33.556 "read": true, 00:04:33.556 "write": true, 00:04:33.556 "unmap": true, 00:04:33.556 "write_zeroes": true, 00:04:33.556 "flush": true, 00:04:33.556 "reset": true, 00:04:33.556 "compare": false, 00:04:33.556 "compare_and_write": false, 00:04:33.556 "abort": true, 00:04:33.556 "nvme_admin": false, 00:04:33.556 "nvme_io": false 00:04:33.556 }, 00:04:33.556 "memory_domains": [ 00:04:33.556 { 00:04:33.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.556 "dma_device_type": 2 00:04:33.556 } 00:04:33.556 ], 00:04:33.556 "driver_specific": {} 00:04:33.556 } 00:04:33.556 ]' 00:04:33.556 09:42:22 -- rpc/rpc.sh@17 -- # jq length 00:04:33.556 09:42:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.556 09:42:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 [2024-12-15 09:42:22.465774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.556 [2024-12-15 09:42:22.465820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.556 [2024-12-15 09:42:22.465837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:33.556 [2024-12-15 09:42:22.465845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.556 [2024-12-15 09:42:22.467534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.556 [2024-12-15 09:42:22.467564] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.556 Passthru0 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.556 { 00:04:33.556 "name": "Malloc0", 00:04:33.556 "aliases": [ 00:04:33.556 "c9ce9703-a7b5-4fef-b21a-11d792712b9f" 00:04:33.556 ], 00:04:33.556 "product_name": "Malloc disk", 00:04:33.556 "block_size": 512, 00:04:33.556 "num_blocks": 16384, 00:04:33.556 "uuid": "c9ce9703-a7b5-4fef-b21a-11d792712b9f", 00:04:33.556 "assigned_rate_limits": { 00:04:33.556 "rw_ios_per_sec": 0, 00:04:33.556 "rw_mbytes_per_sec": 0, 00:04:33.556 "r_mbytes_per_sec": 0, 00:04:33.556 "w_mbytes_per_sec": 0 00:04:33.556 }, 00:04:33.556 "claimed": true, 00:04:33.556 "claim_type": "exclusive_write", 00:04:33.556 "zoned": false, 00:04:33.556 "supported_io_types": { 00:04:33.556 "read": true, 00:04:33.556 "write": true, 00:04:33.556 "unmap": true, 00:04:33.556 "write_zeroes": true, 00:04:33.556 "flush": true, 00:04:33.556 "reset": true, 00:04:33.556 "compare": false, 00:04:33.556 "compare_and_write": false, 00:04:33.556 "abort": true, 
00:04:33.556 "nvme_admin": false, 00:04:33.556 "nvme_io": false 00:04:33.556 }, 00:04:33.556 "memory_domains": [ 00:04:33.556 { 00:04:33.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.556 "dma_device_type": 2 00:04:33.556 } 00:04:33.556 ], 00:04:33.556 "driver_specific": {} 00:04:33.556 }, 00:04:33.556 { 00:04:33.556 "name": "Passthru0", 00:04:33.556 "aliases": [ 00:04:33.556 "b3bb62dc-5d94-579d-a2dd-d1fc1fcea4c8" 00:04:33.556 ], 00:04:33.556 "product_name": "passthru", 00:04:33.556 "block_size": 512, 00:04:33.556 "num_blocks": 16384, 00:04:33.556 "uuid": "b3bb62dc-5d94-579d-a2dd-d1fc1fcea4c8", 00:04:33.556 "assigned_rate_limits": { 00:04:33.556 "rw_ios_per_sec": 0, 00:04:33.556 "rw_mbytes_per_sec": 0, 00:04:33.556 "r_mbytes_per_sec": 0, 00:04:33.556 "w_mbytes_per_sec": 0 00:04:33.556 }, 00:04:33.556 "claimed": false, 00:04:33.556 "zoned": false, 00:04:33.556 "supported_io_types": { 00:04:33.556 "read": true, 00:04:33.556 "write": true, 00:04:33.556 "unmap": true, 00:04:33.556 "write_zeroes": true, 00:04:33.556 "flush": true, 00:04:33.556 "reset": true, 00:04:33.556 "compare": false, 00:04:33.556 "compare_and_write": false, 00:04:33.556 "abort": true, 00:04:33.556 "nvme_admin": false, 00:04:33.556 "nvme_io": false 00:04:33.556 }, 00:04:33.556 "memory_domains": [ 00:04:33.556 { 00:04:33.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.556 "dma_device_type": 2 00:04:33.556 } 00:04:33.556 ], 00:04:33.556 "driver_specific": { 00:04:33.556 "passthru": { 00:04:33.556 "name": "Passthru0", 00:04:33.556 "base_bdev_name": "Malloc0" 00:04:33.556 } 00:04:33.556 } 00:04:33.556 } 00:04:33.556 ]' 00:04:33.556 09:42:22 -- rpc/rpc.sh@21 -- # jq length 00:04:33.556 09:42:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.556 09:42:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.556 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.556 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.556 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.556 09:42:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.556 09:42:22 -- rpc/rpc.sh@26 -- # jq length 00:04:33.815 ************************************ 00:04:33.815 END TEST rpc_integrity 00:04:33.815 ************************************ 00:04:33.815 09:42:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.815 00:04:33.815 real 0m0.233s 00:04:33.815 user 0m0.121s 00:04:33.815 sys 0m0.030s 00:04:33.815 09:42:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.815 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.815 09:42:22 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.815 09:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.815 09:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.815 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.815 ************************************ 00:04:33.815 START TEST rpc_plugins 00:04:33.815 
************************************ 00:04:33.815 09:42:22 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:33.815 09:42:22 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.815 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.815 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.815 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.815 09:42:22 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.815 09:42:22 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.815 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.815 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.815 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.815 09:42:22 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.815 { 00:04:33.815 "name": "Malloc1", 00:04:33.815 "aliases": [ 00:04:33.815 "66ddee87-200f-4c5c-b114-c33f32f40565" 00:04:33.815 ], 00:04:33.815 "product_name": "Malloc disk", 00:04:33.815 "block_size": 4096, 00:04:33.815 "num_blocks": 256, 00:04:33.815 "uuid": "66ddee87-200f-4c5c-b114-c33f32f40565", 00:04:33.815 "assigned_rate_limits": { 00:04:33.815 "rw_ios_per_sec": 0, 00:04:33.815 "rw_mbytes_per_sec": 0, 00:04:33.815 "r_mbytes_per_sec": 0, 00:04:33.815 "w_mbytes_per_sec": 0 00:04:33.815 }, 00:04:33.815 "claimed": false, 00:04:33.815 "zoned": false, 00:04:33.815 "supported_io_types": { 00:04:33.815 "read": true, 00:04:33.815 "write": true, 00:04:33.815 "unmap": true, 00:04:33.815 "write_zeroes": true, 00:04:33.815 "flush": true, 00:04:33.815 "reset": true, 00:04:33.815 "compare": false, 00:04:33.815 "compare_and_write": false, 00:04:33.815 "abort": true, 00:04:33.815 "nvme_admin": false, 00:04:33.815 "nvme_io": false 00:04:33.815 }, 00:04:33.815 "memory_domains": [ 00:04:33.815 { 00:04:33.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.815 "dma_device_type": 2 00:04:33.815 } 00:04:33.815 ], 00:04:33.815 "driver_specific": {} 00:04:33.815 } 00:04:33.815 ]' 00:04:33.815 09:42:22 -- rpc/rpc.sh@32 -- # jq length 00:04:33.815 09:42:22 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:33.815 09:42:22 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:33.815 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.815 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.815 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.815 09:42:22 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:33.815 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.815 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.816 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.816 09:42:22 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:33.816 09:42:22 -- rpc/rpc.sh@36 -- # jq length 00:04:33.816 ************************************ 00:04:33.816 END TEST rpc_plugins 00:04:33.816 ************************************ 00:04:33.816 09:42:22 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:33.816 00:04:33.816 real 0m0.110s 00:04:33.816 user 0m0.069s 00:04:33.816 sys 0m0.011s 00:04:33.816 09:42:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.816 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.816 09:42:22 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:33.816 09:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.816 09:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.816 09:42:22 -- common/autotest_common.sh@10 -- # set +x 
00:04:33.816 ************************************ 00:04:33.816 START TEST rpc_trace_cmd_test 00:04:33.816 ************************************ 00:04:33.816 09:42:22 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:33.816 09:42:22 -- rpc/rpc.sh@40 -- # local info 00:04:33.816 09:42:22 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:33.816 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.816 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.816 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.816 09:42:22 -- rpc/rpc.sh@42 -- # info='{ 00:04:33.816 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56180", 00:04:33.816 "tpoint_group_mask": "0x8", 00:04:33.816 "iscsi_conn": { 00:04:33.816 "mask": "0x2", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "scsi": { 00:04:33.816 "mask": "0x4", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "bdev": { 00:04:33.816 "mask": "0x8", 00:04:33.816 "tpoint_mask": "0xffffffffffffffff" 00:04:33.816 }, 00:04:33.816 "nvmf_rdma": { 00:04:33.816 "mask": "0x10", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "nvmf_tcp": { 00:04:33.816 "mask": "0x20", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "ftl": { 00:04:33.816 "mask": "0x40", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "blobfs": { 00:04:33.816 "mask": "0x80", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "dsa": { 00:04:33.816 "mask": "0x200", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "thread": { 00:04:33.816 "mask": "0x400", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "nvme_pcie": { 00:04:33.816 "mask": "0x800", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "iaa": { 00:04:33.816 "mask": "0x1000", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "nvme_tcp": { 00:04:33.816 "mask": "0x2000", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 }, 00:04:33.816 "bdev_nvme": { 00:04:33.816 "mask": "0x4000", 00:04:33.816 "tpoint_mask": "0x0" 00:04:33.816 } 00:04:33.816 }' 00:04:33.816 09:42:22 -- rpc/rpc.sh@43 -- # jq length 00:04:33.816 09:42:22 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:33.816 09:42:22 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.074 09:42:22 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.074 09:42:22 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.074 09:42:22 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.074 09:42:22 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.074 09:42:22 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.074 09:42:22 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.074 ************************************ 00:04:34.074 END TEST rpc_trace_cmd_test 00:04:34.074 ************************************ 00:04:34.074 09:42:22 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.074 00:04:34.074 real 0m0.161s 00:04:34.074 user 0m0.133s 00:04:34.074 sys 0m0.018s 00:04:34.074 09:42:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.074 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:34.074 09:42:22 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.074 09:42:22 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.074 09:42:22 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.074 09:42:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.074 09:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.074 09:42:22 -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.074 ************************************ 00:04:34.074 START TEST rpc_daemon_integrity 00:04:34.074 ************************************ 00:04:34.074 09:42:22 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:34.074 09:42:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.074 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.074 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:34.074 09:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.074 09:42:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.074 09:42:22 -- rpc/rpc.sh@13 -- # jq length 00:04:34.074 09:42:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.074 09:42:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.075 09:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.075 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:34.075 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.075 09:42:23 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.075 09:42:23 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.075 09:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.075 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.075 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.075 09:42:23 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.075 { 00:04:34.075 "name": "Malloc2", 00:04:34.075 "aliases": [ 00:04:34.075 "20f872c4-87c4-4dce-b48c-fb54b01efb4f" 00:04:34.075 ], 00:04:34.075 "product_name": "Malloc disk", 00:04:34.075 "block_size": 512, 00:04:34.075 "num_blocks": 16384, 00:04:34.075 "uuid": "20f872c4-87c4-4dce-b48c-fb54b01efb4f", 00:04:34.075 "assigned_rate_limits": { 00:04:34.075 "rw_ios_per_sec": 0, 00:04:34.075 "rw_mbytes_per_sec": 0, 00:04:34.075 "r_mbytes_per_sec": 0, 00:04:34.075 "w_mbytes_per_sec": 0 00:04:34.075 }, 00:04:34.075 "claimed": false, 00:04:34.075 "zoned": false, 00:04:34.075 "supported_io_types": { 00:04:34.075 "read": true, 00:04:34.075 "write": true, 00:04:34.075 "unmap": true, 00:04:34.075 "write_zeroes": true, 00:04:34.075 "flush": true, 00:04:34.075 "reset": true, 00:04:34.075 "compare": false, 00:04:34.075 "compare_and_write": false, 00:04:34.075 "abort": true, 00:04:34.075 "nvme_admin": false, 00:04:34.075 "nvme_io": false 00:04:34.075 }, 00:04:34.075 "memory_domains": [ 00:04:34.075 { 00:04:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.075 "dma_device_type": 2 00:04:34.075 } 00:04:34.075 ], 00:04:34.075 "driver_specific": {} 00:04:34.075 } 00:04:34.075 ]' 00:04:34.075 09:42:23 -- rpc/rpc.sh@17 -- # jq length 00:04:34.075 09:42:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.075 09:42:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.075 09:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.075 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.075 [2024-12-15 09:42:23.062507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.075 [2024-12-15 09:42:23.062559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.075 [2024-12-15 09:42:23.062581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:34.075 [2024-12-15 09:42:23.062594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.075 [2024-12-15 09:42:23.064270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.075 
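The bdev_get_bdevs dumps on either side of this point show Passthru0 inheriting Malloc2's geometry (block_size 512, num_blocks 16384). A sketch of verifying that in-process, assuming spdk/bdev.h and that it runs inside the target after the RPCs above; the function name is hypothetical:

#include <assert.h>
#include "spdk/bdev.h"

static void passthru_geometry_sketch(void)
{
    struct spdk_bdev *base = spdk_bdev_get_by_name("Malloc2");
    struct spdk_bdev *pt = spdk_bdev_get_by_name("Passthru0");

    if (base != NULL && pt != NULL) {
        /* The passthru vbdev mirrors its base bdev's geometry. */
        assert(spdk_bdev_get_block_size(base) == spdk_bdev_get_block_size(pt));
        assert(spdk_bdev_get_num_blocks(base) == spdk_bdev_get_num_blocks(pt));
    }
}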
[2024-12-15 09:42:23.064300] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.075 Passthru0 00:04:34.075 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.075 09:42:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.075 09:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.075 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.075 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.075 09:42:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.075 { 00:04:34.075 "name": "Malloc2", 00:04:34.075 "aliases": [ 00:04:34.075 "20f872c4-87c4-4dce-b48c-fb54b01efb4f" 00:04:34.075 ], 00:04:34.075 "product_name": "Malloc disk", 00:04:34.075 "block_size": 512, 00:04:34.075 "num_blocks": 16384, 00:04:34.075 "uuid": "20f872c4-87c4-4dce-b48c-fb54b01efb4f", 00:04:34.075 "assigned_rate_limits": { 00:04:34.075 "rw_ios_per_sec": 0, 00:04:34.075 "rw_mbytes_per_sec": 0, 00:04:34.075 "r_mbytes_per_sec": 0, 00:04:34.075 "w_mbytes_per_sec": 0 00:04:34.075 }, 00:04:34.075 "claimed": true, 00:04:34.075 "claim_type": "exclusive_write", 00:04:34.075 "zoned": false, 00:04:34.075 "supported_io_types": { 00:04:34.075 "read": true, 00:04:34.075 "write": true, 00:04:34.075 "unmap": true, 00:04:34.075 "write_zeroes": true, 00:04:34.075 "flush": true, 00:04:34.075 "reset": true, 00:04:34.075 "compare": false, 00:04:34.075 "compare_and_write": false, 00:04:34.075 "abort": true, 00:04:34.075 "nvme_admin": false, 00:04:34.075 "nvme_io": false 00:04:34.075 }, 00:04:34.075 "memory_domains": [ 00:04:34.075 { 00:04:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.075 "dma_device_type": 2 00:04:34.075 } 00:04:34.075 ], 00:04:34.075 "driver_specific": {} 00:04:34.075 }, 00:04:34.075 { 00:04:34.075 "name": "Passthru0", 00:04:34.075 "aliases": [ 00:04:34.075 "77b57648-d932-5f5e-af74-2a4dadf071cd" 00:04:34.075 ], 00:04:34.075 "product_name": "passthru", 00:04:34.075 "block_size": 512, 00:04:34.075 "num_blocks": 16384, 00:04:34.075 "uuid": "77b57648-d932-5f5e-af74-2a4dadf071cd", 00:04:34.075 "assigned_rate_limits": { 00:04:34.075 "rw_ios_per_sec": 0, 00:04:34.075 "rw_mbytes_per_sec": 0, 00:04:34.075 "r_mbytes_per_sec": 0, 00:04:34.075 "w_mbytes_per_sec": 0 00:04:34.075 }, 00:04:34.075 "claimed": false, 00:04:34.075 "zoned": false, 00:04:34.075 "supported_io_types": { 00:04:34.075 "read": true, 00:04:34.075 "write": true, 00:04:34.075 "unmap": true, 00:04:34.075 "write_zeroes": true, 00:04:34.075 "flush": true, 00:04:34.075 "reset": true, 00:04:34.075 "compare": false, 00:04:34.075 "compare_and_write": false, 00:04:34.075 "abort": true, 00:04:34.075 "nvme_admin": false, 00:04:34.075 "nvme_io": false 00:04:34.075 }, 00:04:34.075 "memory_domains": [ 00:04:34.075 { 00:04:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.075 "dma_device_type": 2 00:04:34.075 } 00:04:34.075 ], 00:04:34.075 "driver_specific": { 00:04:34.075 "passthru": { 00:04:34.075 "name": "Passthru0", 00:04:34.075 "base_bdev_name": "Malloc2" 00:04:34.075 } 00:04:34.075 } 00:04:34.075 } 00:04:34.075 ]' 00:04:34.075 09:42:23 -- rpc/rpc.sh@21 -- # jq length 00:04:34.333 09:42:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.333 09:42:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.333 09:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.333 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.333 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.333 09:42:23 -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:34.333 09:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.333 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.333 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.333 09:42:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.333 09:42:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.333 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.333 09:42:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.333 09:42:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.333 09:42:23 -- rpc/rpc.sh@26 -- # jq length 00:04:34.333 ************************************ 00:04:34.333 END TEST rpc_daemon_integrity 00:04:34.333 ************************************ 00:04:34.333 09:42:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.333 00:04:34.333 real 0m0.225s 00:04:34.333 user 0m0.124s 00:04:34.333 sys 0m0.026s 00:04:34.333 09:42:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.333 09:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:34.333 09:42:23 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.333 09:42:23 -- rpc/rpc.sh@84 -- # killprocess 56180 00:04:34.333 09:42:23 -- common/autotest_common.sh@936 -- # '[' -z 56180 ']' 00:04:34.333 09:42:23 -- common/autotest_common.sh@940 -- # kill -0 56180 00:04:34.333 09:42:23 -- common/autotest_common.sh@941 -- # uname 00:04:34.333 09:42:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:34.333 09:42:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56180 00:04:34.333 killing process with pid 56180 00:04:34.333 09:42:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:34.333 09:42:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:34.333 09:42:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56180' 00:04:34.333 09:42:23 -- common/autotest_common.sh@955 -- # kill 56180 00:04:34.333 09:42:23 -- common/autotest_common.sh@960 -- # wait 56180 00:04:35.710 00:04:35.710 real 0m3.069s 00:04:35.710 user 0m3.441s 00:04:35.710 sys 0m0.554s 00:04:35.710 09:42:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.710 ************************************ 00:04:35.710 END TEST rpc 00:04:35.710 ************************************ 00:04:35.710 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.710 09:42:24 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.710 09:42:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.710 09:42:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.710 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.710 ************************************ 00:04:35.710 START TEST rpc_client 00:04:35.710 ************************************ 00:04:35.710 09:42:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.710 * Looking for test storage... 
00:04:35.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:35.710 09:42:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.710 09:42:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.710 09:42:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.710 09:42:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.710 09:42:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.710 09:42:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.710 09:42:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.710 09:42:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.710 09:42:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.710 09:42:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.710 09:42:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.710 09:42:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.710 09:42:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.710 09:42:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.710 09:42:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.710 09:42:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.710 09:42:24 -- scripts/common.sh@344 -- # : 1 00:04:35.710 09:42:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.710 09:42:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.710 09:42:24 -- scripts/common.sh@364 -- # decimal 1 00:04:35.710 09:42:24 -- scripts/common.sh@352 -- # local d=1 00:04:35.710 09:42:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.710 09:42:24 -- scripts/common.sh@354 -- # echo 1 00:04:35.710 09:42:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.710 09:42:24 -- scripts/common.sh@365 -- # decimal 2 00:04:35.710 09:42:24 -- scripts/common.sh@352 -- # local d=2 00:04:35.710 09:42:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.710 09:42:24 -- scripts/common.sh@354 -- # echo 2 00:04:35.710 09:42:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.710 09:42:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.710 09:42:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.710 09:42:24 -- scripts/common.sh@367 -- # return 0 00:04:35.710 09:42:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.710 09:42:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.710 --rc genhtml_branch_coverage=1 00:04:35.710 --rc genhtml_function_coverage=1 00:04:35.710 --rc genhtml_legend=1 00:04:35.710 --rc geninfo_all_blocks=1 00:04:35.710 --rc geninfo_unexecuted_blocks=1 00:04:35.710 00:04:35.710 ' 00:04:35.710 09:42:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.710 --rc genhtml_branch_coverage=1 00:04:35.711 --rc genhtml_function_coverage=1 00:04:35.711 --rc genhtml_legend=1 00:04:35.711 --rc geninfo_all_blocks=1 00:04:35.711 --rc geninfo_unexecuted_blocks=1 00:04:35.711 00:04:35.711 ' 00:04:35.711 09:42:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.711 --rc genhtml_branch_coverage=1 00:04:35.711 --rc genhtml_function_coverage=1 00:04:35.711 --rc genhtml_legend=1 00:04:35.711 --rc geninfo_all_blocks=1 00:04:35.711 --rc geninfo_unexecuted_blocks=1 00:04:35.711 00:04:35.711 ' 00:04:35.711 
09:42:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.711 --rc genhtml_branch_coverage=1 00:04:35.711 --rc genhtml_function_coverage=1 00:04:35.711 --rc genhtml_legend=1 00:04:35.711 --rc geninfo_all_blocks=1 00:04:35.711 --rc geninfo_unexecuted_blocks=1 00:04:35.711 00:04:35.711 ' 00:04:35.711 09:42:24 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:35.711 OK 00:04:35.711 09:42:24 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:35.711 00:04:35.711 real 0m0.169s 00:04:35.711 user 0m0.098s 00:04:35.711 sys 0m0.078s 00:04:35.711 09:42:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.711 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.711 ************************************ 00:04:35.711 END TEST rpc_client 00:04:35.711 ************************************ 00:04:35.711 09:42:24 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.711 09:42:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.711 09:42:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.711 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.711 ************************************ 00:04:35.711 START TEST json_config 00:04:35.711 ************************************ 00:04:35.711 09:42:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.711 09:42:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.711 09:42:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.711 09:42:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.971 09:42:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.972 09:42:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.972 09:42:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.972 09:42:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.972 09:42:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.972 09:42:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.972 09:42:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.972 09:42:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.972 09:42:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.972 09:42:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.972 09:42:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.972 09:42:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.972 09:42:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.972 09:42:24 -- scripts/common.sh@344 -- # : 1 00:04:35.972 09:42:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.972 09:42:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.972 09:42:24 -- scripts/common.sh@364 -- # decimal 1 00:04:35.972 09:42:24 -- scripts/common.sh@352 -- # local d=1 00:04:35.972 09:42:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.972 09:42:24 -- scripts/common.sh@354 -- # echo 1 00:04:35.972 09:42:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.972 09:42:24 -- scripts/common.sh@365 -- # decimal 2 00:04:35.972 09:42:24 -- scripts/common.sh@352 -- # local d=2 00:04:35.972 09:42:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.972 09:42:24 -- scripts/common.sh@354 -- # echo 2 00:04:35.972 09:42:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.972 09:42:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.972 09:42:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.972 09:42:24 -- scripts/common.sh@367 -- # return 0 00:04:35.972 09:42:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.972 09:42:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.972 --rc genhtml_branch_coverage=1 00:04:35.972 --rc genhtml_function_coverage=1 00:04:35.972 --rc genhtml_legend=1 00:04:35.972 --rc geninfo_all_blocks=1 00:04:35.972 --rc geninfo_unexecuted_blocks=1 00:04:35.972 00:04:35.972 ' 00:04:35.972 09:42:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.972 --rc genhtml_branch_coverage=1 00:04:35.972 --rc genhtml_function_coverage=1 00:04:35.972 --rc genhtml_legend=1 00:04:35.972 --rc geninfo_all_blocks=1 00:04:35.972 --rc geninfo_unexecuted_blocks=1 00:04:35.972 00:04:35.972 ' 00:04:35.972 09:42:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.972 --rc genhtml_branch_coverage=1 00:04:35.972 --rc genhtml_function_coverage=1 00:04:35.972 --rc genhtml_legend=1 00:04:35.972 --rc geninfo_all_blocks=1 00:04:35.972 --rc geninfo_unexecuted_blocks=1 00:04:35.972 00:04:35.972 ' 00:04:35.972 09:42:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.972 --rc genhtml_branch_coverage=1 00:04:35.972 --rc genhtml_function_coverage=1 00:04:35.972 --rc genhtml_legend=1 00:04:35.972 --rc geninfo_all_blocks=1 00:04:35.972 --rc geninfo_unexecuted_blocks=1 00:04:35.972 00:04:35.972 ' 00:04:35.972 09:42:24 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.972 09:42:24 -- nvmf/common.sh@7 -- # uname -s 00:04:35.972 09:42:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.972 09:42:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.972 09:42:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.972 09:42:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.972 09:42:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.972 09:42:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.972 09:42:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.972 09:42:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.972 09:42:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.972 09:42:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.972 09:42:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d99513a-c383-4fd7-ab90-5cd725b0d4d6 
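For reference, the lt/cmp_versions sequence xtraced above (scripts/common.sh) reduces to a field-by-field numeric compare used to decide whether the installed lcov predates 2.x and therefore needs the --rc branch/function coverage options. The sketch below is a simplified reconstruction from the trace, not the verbatim helper; the real script also routes each field through a decimal() sanitizer that this sketch omits.

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            # Missing fields compare as 0, so "1.15" vs "2" works field-wise.
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' ]] && return 0 || return 1
            elif ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' ]] && return 0 || return 1
            fi
        done
        return 1 # all fields equal: a strict compare fails
    }

    lt 1.15 2 && echo 'lcov < 2: enable the --rc coverage options'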
00:04:35.972 09:42:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d99513a-c383-4fd7-ab90-5cd725b0d4d6 00:04:35.972 09:42:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.972 09:42:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.972 09:42:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.972 09:42:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.972 09:42:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.972 09:42:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.972 09:42:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.972 09:42:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.972 09:42:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.972 09:42:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.972 09:42:24 -- paths/export.sh@5 -- # export PATH 00:04:35.972 09:42:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.972 09:42:24 -- nvmf/common.sh@46 -- # : 0 00:04:35.972 09:42:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:35.972 09:42:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:35.972 09:42:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:35.972 09:42:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.972 09:42:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.972 09:42:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:35.972 09:42:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:35.972 09:42:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:35.972 09:42:24 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:35.972 09:42:24 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:35.972 09:42:24 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:35.972 09:42:24 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:35.972 09:42:24 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:35.972 WARNING: No tests are enabled so not running JSON configuration tests 00:04:35.972 09:42:24 -- json_config/json_config.sh@27 -- # exit 0 00:04:35.972 00:04:35.972 real 0m0.133s 00:04:35.972 user 0m0.082s 00:04:35.972 sys 0m0.052s 00:04:35.972 09:42:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.972 ************************************ 00:04:35.972 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.972 END TEST json_config 00:04:35.972 ************************************ 00:04:35.972 09:42:24 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.972 09:42:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.972 09:42:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.972 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.972 ************************************ 00:04:35.972 START TEST json_config_extra_key 00:04:35.972 ************************************ 00:04:35.972 09:42:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.972 09:42:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.972 09:42:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.972 09:42:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.972 09:42:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.972 09:42:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.972 09:42:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.972 09:42:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.972 09:42:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.972 09:42:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.972 09:42:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.972 09:42:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.972 09:42:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.972 09:42:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.972 09:42:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.972 09:42:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.972 09:42:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.972 09:42:24 -- scripts/common.sh@344 -- # : 1 00:04:35.972 09:42:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.972 09:42:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.972 09:42:24 -- scripts/common.sh@364 -- # decimal 1 00:04:35.972 09:42:24 -- scripts/common.sh@352 -- # local d=1 00:04:35.972 09:42:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.972 09:42:24 -- scripts/common.sh@354 -- # echo 1 00:04:35.972 09:42:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.972 09:42:24 -- scripts/common.sh@365 -- # decimal 2 00:04:35.972 09:42:24 -- scripts/common.sh@352 -- # local d=2 00:04:35.972 09:42:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.972 09:42:24 -- scripts/common.sh@354 -- # echo 2 00:04:35.972 09:42:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.972 09:42:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.972 09:42:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.972 09:42:24 -- scripts/common.sh@367 -- # return 0 00:04:35.973 09:42:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.973 09:42:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.973 --rc genhtml_branch_coverage=1 00:04:35.973 --rc genhtml_function_coverage=1 00:04:35.973 --rc genhtml_legend=1 00:04:35.973 --rc geninfo_all_blocks=1 00:04:35.973 --rc geninfo_unexecuted_blocks=1 00:04:35.973 00:04:35.973 ' 00:04:35.973 09:42:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.973 --rc genhtml_branch_coverage=1 00:04:35.973 --rc genhtml_function_coverage=1 00:04:35.973 --rc genhtml_legend=1 00:04:35.973 --rc geninfo_all_blocks=1 00:04:35.973 --rc geninfo_unexecuted_blocks=1 00:04:35.973 00:04:35.973 ' 00:04:35.973 09:42:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.973 --rc genhtml_branch_coverage=1 00:04:35.973 --rc genhtml_function_coverage=1 00:04:35.973 --rc genhtml_legend=1 00:04:35.973 --rc geninfo_all_blocks=1 00:04:35.973 --rc geninfo_unexecuted_blocks=1 00:04:35.973 00:04:35.973 ' 00:04:35.973 09:42:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.973 --rc genhtml_branch_coverage=1 00:04:35.973 --rc genhtml_function_coverage=1 00:04:35.973 --rc genhtml_legend=1 00:04:35.973 --rc geninfo_all_blocks=1 00:04:35.973 --rc geninfo_unexecuted_blocks=1 00:04:35.973 00:04:35.973 ' 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.973 09:42:24 -- nvmf/common.sh@7 -- # uname -s 00:04:35.973 09:42:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.973 09:42:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.973 09:42:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.973 09:42:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.973 09:42:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.973 09:42:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.973 09:42:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.973 09:42:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.973 09:42:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.973 09:42:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.973 09:42:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d99513a-c383-4fd7-ab90-5cd725b0d4d6 00:04:35.973 09:42:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d99513a-c383-4fd7-ab90-5cd725b0d4d6 00:04:35.973 09:42:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.973 09:42:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.973 09:42:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.973 09:42:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.973 09:42:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.973 09:42:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.973 09:42:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.973 09:42:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.973 09:42:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.973 09:42:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.973 09:42:24 -- paths/export.sh@5 -- # export PATH 00:04:35.973 09:42:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.973 09:42:24 -- nvmf/common.sh@46 -- # : 0 00:04:35.973 09:42:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:35.973 09:42:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:35.973 09:42:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:35.973 09:42:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.973 09:42:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.973 09:42:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:35.973 09:42:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:35.973 09:42:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.973 INFO: launching applications... 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56479 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:35.973 Waiting for target to run... 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56479 /var/tmp/spdk_tgt.sock 00:04:35.973 09:42:24 -- common/autotest_common.sh@829 -- # '[' -z 56479 ']' 00:04:35.973 09:42:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.973 09:42:24 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.973 09:42:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.973 09:42:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.973 09:42:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.973 09:42:24 -- common/autotest_common.sh@10 -- # set +x 00:04:36.235 [2024-12-15 09:42:25.024307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
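The json_config_test_start_app/waitforlisten step traced above amounts to launching spdk_tgt with the extra-key JSON config and polling its RPC socket until it answers. Paths and target flags below are lifted from the trace; the polling loop itself is an assumed reconstruction (the probe method and the 100-try bound are illustrative choices, though spdk_get_version does appear in the rpc_get_methods listing later in this log).

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/test/json_config/extra_key.json" &
    app_pid=$!

    # Poll until the UNIX-domain RPC socket answers; spdk_get_version is a
    # cheap liveness probe. The retry bound is an assumption of this sketch.
    for ((i = 0; i < 100; i++)); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock -t 2 \
            spdk_get_version &> /dev/null; then
            break
        fi
        sleep 0.5
    done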
00:04:36.235 [2024-12-15 09:42:25.024516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56479 ] 00:04:36.495 [2024-12-15 09:42:25.310931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.495 [2024-12-15 09:42:25.479450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.495 [2024-12-15 09:42:25.479785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.881 09:42:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.881 09:42:26 -- common/autotest_common.sh@862 -- # return 0 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:37.881 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:37.881 INFO: shutting down applications... 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56479 ]] 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56479 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56479 00:04:37.881 09:42:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:38.142 09:42:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:38.142 09:42:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:38.142 09:42:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56479 00:04:38.142 09:42:27 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:38.709 09:42:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:38.709 09:42:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:38.709 09:42:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56479 00:04:38.709 09:42:27 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:39.276 09:42:28 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:39.276 09:42:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:39.276 09:42:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56479 00:04:39.276 09:42:28 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:39.918 SPDK target shutdown done 00:04:39.918 Success 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56479 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:39.918 09:42:28 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:39.918 
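The shutdown sequence traced just above (json_config_test_shutdown_app) is a SIGINT followed by a bounded poll: kill -0 probes the pid each half second for up to 30 ticks until the target exits. A minimal sketch of that loop, reconstructed from the trace:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it succeeds only while the pid is alive.
        kill -0 "$app_pid" 2> /dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'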
00:04:39.918 real 0m3.736s 00:04:39.918 user 0m3.360s 00:04:39.918 sys 0m0.396s 00:04:39.918 09:42:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.918 ************************************ 00:04:39.918 END TEST json_config_extra_key 00:04:39.918 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:39.918 ************************************ 00:04:39.918 09:42:28 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.918 09:42:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.918 09:42:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.918 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:39.918 ************************************ 00:04:39.918 START TEST alias_rpc 00:04:39.918 ************************************ 00:04:39.918 09:42:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.918 * Looking for test storage... 00:04:39.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:39.918 09:42:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:39.918 09:42:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:39.918 09:42:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.918 09:42:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.919 09:42:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.919 09:42:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.919 09:42:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.919 09:42:28 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.919 09:42:28 -- scripts/common.sh@335 -- # read -ra ver1 00:04:39.919 09:42:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.919 09:42:28 -- scripts/common.sh@336 -- # read -ra ver2 00:04:39.919 09:42:28 -- scripts/common.sh@337 -- # local 'op=<' 00:04:39.919 09:42:28 -- scripts/common.sh@339 -- # ver1_l=2 00:04:39.919 09:42:28 -- scripts/common.sh@340 -- # ver2_l=1 00:04:39.919 09:42:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:39.919 09:42:28 -- scripts/common.sh@343 -- # case "$op" in 00:04:39.919 09:42:28 -- scripts/common.sh@344 -- # : 1 00:04:39.919 09:42:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:39.919 09:42:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.919 09:42:28 -- scripts/common.sh@364 -- # decimal 1 00:04:39.919 09:42:28 -- scripts/common.sh@352 -- # local d=1 00:04:39.919 09:42:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.919 09:42:28 -- scripts/common.sh@354 -- # echo 1 00:04:39.919 09:42:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:39.919 09:42:28 -- scripts/common.sh@365 -- # decimal 2 00:04:39.919 09:42:28 -- scripts/common.sh@352 -- # local d=2 00:04:39.919 09:42:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.919 09:42:28 -- scripts/common.sh@354 -- # echo 2 00:04:39.919 09:42:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:39.919 09:42:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:39.919 09:42:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:39.919 09:42:28 -- scripts/common.sh@367 -- # return 0 00:04:39.919 09:42:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.919 09:42:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.919 --rc genhtml_branch_coverage=1 00:04:39.919 --rc genhtml_function_coverage=1 00:04:39.919 --rc genhtml_legend=1 00:04:39.919 --rc geninfo_all_blocks=1 00:04:39.919 --rc geninfo_unexecuted_blocks=1 00:04:39.919 00:04:39.919 ' 00:04:39.919 09:42:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.919 --rc genhtml_branch_coverage=1 00:04:39.919 --rc genhtml_function_coverage=1 00:04:39.919 --rc genhtml_legend=1 00:04:39.919 --rc geninfo_all_blocks=1 00:04:39.919 --rc geninfo_unexecuted_blocks=1 00:04:39.919 00:04:39.919 ' 00:04:39.919 09:42:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.919 --rc genhtml_branch_coverage=1 00:04:39.919 --rc genhtml_function_coverage=1 00:04:39.919 --rc genhtml_legend=1 00:04:39.919 --rc geninfo_all_blocks=1 00:04:39.919 --rc geninfo_unexecuted_blocks=1 00:04:39.919 00:04:39.919 ' 00:04:39.919 09:42:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.919 --rc genhtml_branch_coverage=1 00:04:39.919 --rc genhtml_function_coverage=1 00:04:39.919 --rc genhtml_legend=1 00:04:39.919 --rc geninfo_all_blocks=1 00:04:39.919 --rc geninfo_unexecuted_blocks=1 00:04:39.919 00:04:39.919 ' 00:04:39.919 09:42:28 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:39.919 09:42:28 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56579 00:04:39.919 09:42:28 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56579 00:04:39.919 09:42:28 -- common/autotest_common.sh@829 -- # '[' -z 56579 ']' 00:04:39.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.919 09:42:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.919 09:42:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.919 09:42:28 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.919 09:42:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
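Each of these suites tears its target down through the killprocess helper whose xtrace appears around the pid 56180 and pid 56579 kills in this log. A reconstruction from those traces (the branch ordering is as traced; treat it as a sketch rather than the verbatim autotest_common.sh function):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1     # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 1    # bail out if it already exited
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        if [[ $process_name == sudo ]]; then
            sudo kill "$pid"          # the reactor_0-vs-sudo branch traced here
        else
            kill "$pid"
        fi
        wait "$pid"                   # reap so the exit status propagates
    }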
00:04:39.919 09:42:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.919 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:04:39.919 [2024-12-15 09:42:28.833469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:39.919 [2024-12-15 09:42:28.833579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56579 ] 00:04:40.191 [2024-12-15 09:42:28.980821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.191 [2024-12-15 09:42:29.119894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:40.191 [2024-12-15 09:42:29.120040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.758 09:42:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.758 09:42:29 -- common/autotest_common.sh@862 -- # return 0 00:04:40.758 09:42:29 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:41.016 09:42:29 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56579 00:04:41.016 09:42:29 -- common/autotest_common.sh@936 -- # '[' -z 56579 ']' 00:04:41.016 09:42:29 -- common/autotest_common.sh@940 -- # kill -0 56579 00:04:41.016 09:42:29 -- common/autotest_common.sh@941 -- # uname 00:04:41.016 09:42:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.016 09:42:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56579 00:04:41.016 killing process with pid 56579 00:04:41.016 09:42:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.016 09:42:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.016 09:42:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56579' 00:04:41.016 09:42:29 -- common/autotest_common.sh@955 -- # kill 56579 00:04:41.016 09:42:29 -- common/autotest_common.sh@960 -- # wait 56579 00:04:42.389 00:04:42.389 real 0m2.418s 00:04:42.389 user 0m2.468s 00:04:42.389 sys 0m0.374s 00:04:42.389 09:42:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.389 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.389 ************************************ 00:04:42.389 END TEST alias_rpc 00:04:42.389 ************************************ 00:04:42.389 09:42:31 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:42.389 09:42:31 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:42.389 09:42:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.389 09:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.389 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.389 ************************************ 00:04:42.389 START TEST spdkcli_tcp 00:04:42.389 ************************************ 00:04:42.389 09:42:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:42.389 * Looking for test storage... 
00:04:42.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:42.389 09:42:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.389 09:42:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.389 09:42:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.389 09:42:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.389 09:42:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.389 09:42:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.389 09:42:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.389 09:42:31 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.389 09:42:31 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.389 09:42:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.389 09:42:31 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.389 09:42:31 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.389 09:42:31 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.389 09:42:31 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.389 09:42:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.389 09:42:31 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.389 09:42:31 -- scripts/common.sh@344 -- # : 1 00:04:42.389 09:42:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.389 09:42:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.389 09:42:31 -- scripts/common.sh@364 -- # decimal 1 00:04:42.389 09:42:31 -- scripts/common.sh@352 -- # local d=1 00:04:42.389 09:42:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.389 09:42:31 -- scripts/common.sh@354 -- # echo 1 00:04:42.389 09:42:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.389 09:42:31 -- scripts/common.sh@365 -- # decimal 2 00:04:42.389 09:42:31 -- scripts/common.sh@352 -- # local d=2 00:04:42.389 09:42:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.389 09:42:31 -- scripts/common.sh@354 -- # echo 2 00:04:42.389 09:42:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.389 09:42:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.389 09:42:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.389 09:42:31 -- scripts/common.sh@367 -- # return 0 00:04:42.389 09:42:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.389 09:42:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.389 --rc genhtml_branch_coverage=1 00:04:42.390 --rc genhtml_function_coverage=1 00:04:42.390 --rc genhtml_legend=1 00:04:42.390 --rc geninfo_all_blocks=1 00:04:42.390 --rc geninfo_unexecuted_blocks=1 00:04:42.390 00:04:42.390 ' 00:04:42.390 09:42:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.390 --rc genhtml_branch_coverage=1 00:04:42.390 --rc genhtml_function_coverage=1 00:04:42.390 --rc genhtml_legend=1 00:04:42.390 --rc geninfo_all_blocks=1 00:04:42.390 --rc geninfo_unexecuted_blocks=1 00:04:42.390 00:04:42.390 ' 00:04:42.390 09:42:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.390 --rc genhtml_branch_coverage=1 00:04:42.390 --rc genhtml_function_coverage=1 00:04:42.390 --rc genhtml_legend=1 00:04:42.390 --rc geninfo_all_blocks=1 00:04:42.390 --rc geninfo_unexecuted_blocks=1 00:04:42.390 00:04:42.390 ' 00:04:42.390 09:42:31 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.390 --rc genhtml_branch_coverage=1 00:04:42.390 --rc genhtml_function_coverage=1 00:04:42.390 --rc genhtml_legend=1 00:04:42.390 --rc geninfo_all_blocks=1 00:04:42.390 --rc geninfo_unexecuted_blocks=1 00:04:42.390 00:04:42.390 ' 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:42.390 09:42:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:42.390 09:42:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.390 09:42:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.390 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56674 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 56674 00:04:42.390 09:42:31 -- common/autotest_common.sh@829 -- # '[' -z 56674 ']' 00:04:42.390 09:42:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.390 09:42:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.390 09:42:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.390 09:42:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.390 09:42:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.390 09:42:31 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.390 [2024-12-15 09:42:31.280282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
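The spdkcli_tcp steps traced below boil down to bridging TCP port 9998 to the target's UNIX RPC socket with socat, then issuing RPCs over TCP. The socat and rpc.py invocations are lifted from the trace (spdkcli/tcp.sh@30 and @33); the wrapper lines around them are a sketch.

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r retries, -t per-call timeout, -s address, -p port, as in the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 \
        -p 9998 rpc_get_methods

    kill "$socat_pid"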
00:04:42.390 [2024-12-15 09:42:31.280390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56674 ] 00:04:42.648 [2024-12-15 09:42:31.427402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.648 [2024-12-15 09:42:31.606780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.648 [2024-12-15 09:42:31.607360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.648 [2024-12-15 09:42:31.607564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.023 09:42:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.023 09:42:32 -- common/autotest_common.sh@862 -- # return 0 00:04:44.023 09:42:32 -- spdkcli/tcp.sh@31 -- # socat_pid=56693 00:04:44.023 09:42:32 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:44.023 09:42:32 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:44.023 [ 00:04:44.023 "bdev_malloc_delete", 00:04:44.023 "bdev_malloc_create", 00:04:44.023 "bdev_null_resize", 00:04:44.023 "bdev_null_delete", 00:04:44.023 "bdev_null_create", 00:04:44.023 "bdev_nvme_cuse_unregister", 00:04:44.023 "bdev_nvme_cuse_register", 00:04:44.023 "bdev_opal_new_user", 00:04:44.023 "bdev_opal_set_lock_state", 00:04:44.023 "bdev_opal_delete", 00:04:44.023 "bdev_opal_get_info", 00:04:44.023 "bdev_opal_create", 00:04:44.023 "bdev_nvme_opal_revert", 00:04:44.023 "bdev_nvme_opal_init", 00:04:44.023 "bdev_nvme_send_cmd", 00:04:44.023 "bdev_nvme_get_path_iostat", 00:04:44.023 "bdev_nvme_get_mdns_discovery_info", 00:04:44.023 "bdev_nvme_stop_mdns_discovery", 00:04:44.023 "bdev_nvme_start_mdns_discovery", 00:04:44.023 "bdev_nvme_set_multipath_policy", 00:04:44.023 "bdev_nvme_set_preferred_path", 00:04:44.023 "bdev_nvme_get_io_paths", 00:04:44.023 "bdev_nvme_remove_error_injection", 00:04:44.023 "bdev_nvme_add_error_injection", 00:04:44.023 "bdev_nvme_get_discovery_info", 00:04:44.023 "bdev_nvme_stop_discovery", 00:04:44.023 "bdev_nvme_start_discovery", 00:04:44.023 "bdev_nvme_get_controller_health_info", 00:04:44.023 "bdev_nvme_disable_controller", 00:04:44.023 "bdev_nvme_enable_controller", 00:04:44.023 "bdev_nvme_reset_controller", 00:04:44.023 "bdev_nvme_get_transport_statistics", 00:04:44.023 "bdev_nvme_apply_firmware", 00:04:44.023 "bdev_nvme_detach_controller", 00:04:44.023 "bdev_nvme_get_controllers", 00:04:44.023 "bdev_nvme_attach_controller", 00:04:44.023 "bdev_nvme_set_hotplug", 00:04:44.023 "bdev_nvme_set_options", 00:04:44.023 "bdev_passthru_delete", 00:04:44.023 "bdev_passthru_create", 00:04:44.023 "bdev_lvol_grow_lvstore", 00:04:44.023 "bdev_lvol_get_lvols", 00:04:44.023 "bdev_lvol_get_lvstores", 00:04:44.023 "bdev_lvol_delete", 00:04:44.023 "bdev_lvol_set_read_only", 00:04:44.023 "bdev_lvol_resize", 00:04:44.023 "bdev_lvol_decouple_parent", 00:04:44.023 "bdev_lvol_inflate", 00:04:44.023 "bdev_lvol_rename", 00:04:44.023 "bdev_lvol_clone_bdev", 00:04:44.023 "bdev_lvol_clone", 00:04:44.023 "bdev_lvol_snapshot", 00:04:44.023 "bdev_lvol_create", 00:04:44.023 "bdev_lvol_delete_lvstore", 00:04:44.023 "bdev_lvol_rename_lvstore", 00:04:44.023 "bdev_lvol_create_lvstore", 00:04:44.023 "bdev_raid_set_options", 00:04:44.023 "bdev_raid_remove_base_bdev", 00:04:44.023 "bdev_raid_add_base_bdev", 
00:04:44.023 "bdev_raid_delete", 00:04:44.024 "bdev_raid_create", 00:04:44.024 "bdev_raid_get_bdevs", 00:04:44.024 "bdev_error_inject_error", 00:04:44.024 "bdev_error_delete", 00:04:44.024 "bdev_error_create", 00:04:44.024 "bdev_split_delete", 00:04:44.024 "bdev_split_create", 00:04:44.024 "bdev_delay_delete", 00:04:44.024 "bdev_delay_create", 00:04:44.024 "bdev_delay_update_latency", 00:04:44.024 "bdev_zone_block_delete", 00:04:44.024 "bdev_zone_block_create", 00:04:44.024 "blobfs_create", 00:04:44.024 "blobfs_detect", 00:04:44.024 "blobfs_set_cache_size", 00:04:44.024 "bdev_xnvme_delete", 00:04:44.024 "bdev_xnvme_create", 00:04:44.024 "bdev_aio_delete", 00:04:44.024 "bdev_aio_rescan", 00:04:44.024 "bdev_aio_create", 00:04:44.024 "bdev_ftl_set_property", 00:04:44.024 "bdev_ftl_get_properties", 00:04:44.024 "bdev_ftl_get_stats", 00:04:44.024 "bdev_ftl_unmap", 00:04:44.024 "bdev_ftl_unload", 00:04:44.024 "bdev_ftl_delete", 00:04:44.024 "bdev_ftl_load", 00:04:44.024 "bdev_ftl_create", 00:04:44.024 "bdev_virtio_attach_controller", 00:04:44.024 "bdev_virtio_scsi_get_devices", 00:04:44.024 "bdev_virtio_detach_controller", 00:04:44.024 "bdev_virtio_blk_set_hotplug", 00:04:44.024 "bdev_iscsi_delete", 00:04:44.024 "bdev_iscsi_create", 00:04:44.024 "bdev_iscsi_set_options", 00:04:44.024 "accel_error_inject_error", 00:04:44.024 "ioat_scan_accel_module", 00:04:44.024 "dsa_scan_accel_module", 00:04:44.024 "iaa_scan_accel_module", 00:04:44.024 "iscsi_set_options", 00:04:44.024 "iscsi_get_auth_groups", 00:04:44.024 "iscsi_auth_group_remove_secret", 00:04:44.024 "iscsi_auth_group_add_secret", 00:04:44.024 "iscsi_delete_auth_group", 00:04:44.024 "iscsi_create_auth_group", 00:04:44.024 "iscsi_set_discovery_auth", 00:04:44.024 "iscsi_get_options", 00:04:44.024 "iscsi_target_node_request_logout", 00:04:44.024 "iscsi_target_node_set_redirect", 00:04:44.024 "iscsi_target_node_set_auth", 00:04:44.024 "iscsi_target_node_add_lun", 00:04:44.024 "iscsi_get_connections", 00:04:44.024 "iscsi_portal_group_set_auth", 00:04:44.024 "iscsi_start_portal_group", 00:04:44.024 "iscsi_delete_portal_group", 00:04:44.024 "iscsi_create_portal_group", 00:04:44.024 "iscsi_get_portal_groups", 00:04:44.024 "iscsi_delete_target_node", 00:04:44.024 "iscsi_target_node_remove_pg_ig_maps", 00:04:44.024 "iscsi_target_node_add_pg_ig_maps", 00:04:44.024 "iscsi_create_target_node", 00:04:44.024 "iscsi_get_target_nodes", 00:04:44.024 "iscsi_delete_initiator_group", 00:04:44.024 "iscsi_initiator_group_remove_initiators", 00:04:44.024 "iscsi_initiator_group_add_initiators", 00:04:44.024 "iscsi_create_initiator_group", 00:04:44.024 "iscsi_get_initiator_groups", 00:04:44.024 "nvmf_set_crdt", 00:04:44.024 "nvmf_set_config", 00:04:44.024 "nvmf_set_max_subsystems", 00:04:44.024 "nvmf_subsystem_get_listeners", 00:04:44.024 "nvmf_subsystem_get_qpairs", 00:04:44.024 "nvmf_subsystem_get_controllers", 00:04:44.024 "nvmf_get_stats", 00:04:44.024 "nvmf_get_transports", 00:04:44.024 "nvmf_create_transport", 00:04:44.024 "nvmf_get_targets", 00:04:44.024 "nvmf_delete_target", 00:04:44.024 "nvmf_create_target", 00:04:44.024 "nvmf_subsystem_allow_any_host", 00:04:44.024 "nvmf_subsystem_remove_host", 00:04:44.024 "nvmf_subsystem_add_host", 00:04:44.024 "nvmf_subsystem_remove_ns", 00:04:44.024 "nvmf_subsystem_add_ns", 00:04:44.024 "nvmf_subsystem_listener_set_ana_state", 00:04:44.024 "nvmf_discovery_get_referrals", 00:04:44.024 "nvmf_discovery_remove_referral", 00:04:44.024 "nvmf_discovery_add_referral", 00:04:44.024 "nvmf_subsystem_remove_listener", 00:04:44.024 
"nvmf_subsystem_add_listener", 00:04:44.024 "nvmf_delete_subsystem", 00:04:44.024 "nvmf_create_subsystem", 00:04:44.024 "nvmf_get_subsystems", 00:04:44.024 "env_dpdk_get_mem_stats", 00:04:44.024 "nbd_get_disks", 00:04:44.024 "nbd_stop_disk", 00:04:44.024 "nbd_start_disk", 00:04:44.024 "ublk_recover_disk", 00:04:44.024 "ublk_get_disks", 00:04:44.024 "ublk_stop_disk", 00:04:44.024 "ublk_start_disk", 00:04:44.024 "ublk_destroy_target", 00:04:44.024 "ublk_create_target", 00:04:44.024 "virtio_blk_create_transport", 00:04:44.024 "virtio_blk_get_transports", 00:04:44.024 "vhost_controller_set_coalescing", 00:04:44.024 "vhost_get_controllers", 00:04:44.024 "vhost_delete_controller", 00:04:44.024 "vhost_create_blk_controller", 00:04:44.024 "vhost_scsi_controller_remove_target", 00:04:44.024 "vhost_scsi_controller_add_target", 00:04:44.024 "vhost_start_scsi_controller", 00:04:44.024 "vhost_create_scsi_controller", 00:04:44.024 "thread_set_cpumask", 00:04:44.024 "framework_get_scheduler", 00:04:44.024 "framework_set_scheduler", 00:04:44.024 "framework_get_reactors", 00:04:44.024 "thread_get_io_channels", 00:04:44.024 "thread_get_pollers", 00:04:44.024 "thread_get_stats", 00:04:44.024 "framework_monitor_context_switch", 00:04:44.024 "spdk_kill_instance", 00:04:44.024 "log_enable_timestamps", 00:04:44.024 "log_get_flags", 00:04:44.024 "log_clear_flag", 00:04:44.024 "log_set_flag", 00:04:44.024 "log_get_level", 00:04:44.024 "log_set_level", 00:04:44.024 "log_get_print_level", 00:04:44.024 "log_set_print_level", 00:04:44.024 "framework_enable_cpumask_locks", 00:04:44.024 "framework_disable_cpumask_locks", 00:04:44.024 "framework_wait_init", 00:04:44.024 "framework_start_init", 00:04:44.024 "scsi_get_devices", 00:04:44.024 "bdev_get_histogram", 00:04:44.024 "bdev_enable_histogram", 00:04:44.024 "bdev_set_qos_limit", 00:04:44.024 "bdev_set_qd_sampling_period", 00:04:44.024 "bdev_get_bdevs", 00:04:44.024 "bdev_reset_iostat", 00:04:44.024 "bdev_get_iostat", 00:04:44.024 "bdev_examine", 00:04:44.024 "bdev_wait_for_examine", 00:04:44.024 "bdev_set_options", 00:04:44.024 "notify_get_notifications", 00:04:44.024 "notify_get_types", 00:04:44.024 "accel_get_stats", 00:04:44.024 "accel_set_options", 00:04:44.024 "accel_set_driver", 00:04:44.024 "accel_crypto_key_destroy", 00:04:44.024 "accel_crypto_keys_get", 00:04:44.024 "accel_crypto_key_create", 00:04:44.024 "accel_assign_opc", 00:04:44.024 "accel_get_module_info", 00:04:44.024 "accel_get_opc_assignments", 00:04:44.024 "vmd_rescan", 00:04:44.024 "vmd_remove_device", 00:04:44.024 "vmd_enable", 00:04:44.024 "sock_set_default_impl", 00:04:44.024 "sock_impl_set_options", 00:04:44.024 "sock_impl_get_options", 00:04:44.024 "iobuf_get_stats", 00:04:44.024 "iobuf_set_options", 00:04:44.024 "framework_get_pci_devices", 00:04:44.024 "framework_get_config", 00:04:44.024 "framework_get_subsystems", 00:04:44.024 "trace_get_info", 00:04:44.024 "trace_get_tpoint_group_mask", 00:04:44.024 "trace_disable_tpoint_group", 00:04:44.024 "trace_enable_tpoint_group", 00:04:44.024 "trace_clear_tpoint_mask", 00:04:44.024 "trace_set_tpoint_mask", 00:04:44.024 "spdk_get_version", 00:04:44.024 "rpc_get_methods" 00:04:44.024 ] 00:04:44.024 09:42:32 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:44.024 09:42:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.024 09:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:44.024 09:42:32 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:44.024 09:42:32 -- spdkcli/tcp.sh@38 -- # killprocess 56674 00:04:44.024 
09:42:32 -- common/autotest_common.sh@936 -- # '[' -z 56674 ']' 00:04:44.024 09:42:32 -- common/autotest_common.sh@940 -- # kill -0 56674 00:04:44.024 09:42:32 -- common/autotest_common.sh@941 -- # uname 00:04:44.024 09:42:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.024 09:42:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56674 00:04:44.024 09:42:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.024 09:42:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.024 09:42:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56674' 00:04:44.024 killing process with pid 56674 00:04:44.024 09:42:33 -- common/autotest_common.sh@955 -- # kill 56674 00:04:44.024 09:42:33 -- common/autotest_common.sh@960 -- # wait 56674 00:04:45.927 00:04:45.927 real 0m3.331s 00:04:45.927 user 0m6.166s 00:04:45.927 sys 0m0.416s 00:04:45.927 09:42:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.927 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.927 ************************************ 00:04:45.927 END TEST spdkcli_tcp 00:04:45.927 ************************************ 00:04:45.927 09:42:34 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.927 09:42:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.927 09:42:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.927 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.927 ************************************ 00:04:45.927 START TEST dpdk_mem_utility 00:04:45.927 ************************************ 00:04:45.927 09:42:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.927 * Looking for test storage... 00:04:45.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:45.927 09:42:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:45.927 09:42:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:45.927 09:42:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:45.927 09:42:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:45.927 09:42:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:45.927 09:42:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:45.927 09:42:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:45.927 09:42:34 -- scripts/common.sh@335 -- # IFS=.-: 00:04:45.927 09:42:34 -- scripts/common.sh@335 -- # read -ra ver1 00:04:45.927 09:42:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.927 09:42:34 -- scripts/common.sh@336 -- # read -ra ver2 00:04:45.927 09:42:34 -- scripts/common.sh@337 -- # local 'op=<' 00:04:45.927 09:42:34 -- scripts/common.sh@339 -- # ver1_l=2 00:04:45.927 09:42:34 -- scripts/common.sh@340 -- # ver2_l=1 00:04:45.927 09:42:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:45.927 09:42:34 -- scripts/common.sh@343 -- # case "$op" in 00:04:45.927 09:42:34 -- scripts/common.sh@344 -- # : 1 00:04:45.927 09:42:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:45.927 09:42:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.927 09:42:34 -- scripts/common.sh@364 -- # decimal 1 00:04:45.927 09:42:34 -- scripts/common.sh@352 -- # local d=1 00:04:45.927 09:42:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.927 09:42:34 -- scripts/common.sh@354 -- # echo 1 00:04:45.927 09:42:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:45.927 09:42:34 -- scripts/common.sh@365 -- # decimal 2 00:04:45.927 09:42:34 -- scripts/common.sh@352 -- # local d=2 00:04:45.927 09:42:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.927 09:42:34 -- scripts/common.sh@354 -- # echo 2 00:04:45.927 09:42:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:45.927 09:42:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:45.927 09:42:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:45.927 09:42:34 -- scripts/common.sh@367 -- # return 0 00:04:45.927 09:42:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.927 09:42:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:45.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.927 --rc genhtml_branch_coverage=1 00:04:45.927 --rc genhtml_function_coverage=1 00:04:45.927 --rc genhtml_legend=1 00:04:45.927 --rc geninfo_all_blocks=1 00:04:45.927 --rc geninfo_unexecuted_blocks=1 00:04:45.927 00:04:45.927 ' 00:04:45.927 09:42:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:45.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.927 --rc genhtml_branch_coverage=1 00:04:45.927 --rc genhtml_function_coverage=1 00:04:45.927 --rc genhtml_legend=1 00:04:45.927 --rc geninfo_all_blocks=1 00:04:45.927 --rc geninfo_unexecuted_blocks=1 00:04:45.927 00:04:45.927 ' 00:04:45.927 09:42:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:45.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.927 --rc genhtml_branch_coverage=1 00:04:45.927 --rc genhtml_function_coverage=1 00:04:45.927 --rc genhtml_legend=1 00:04:45.927 --rc geninfo_all_blocks=1 00:04:45.927 --rc geninfo_unexecuted_blocks=1 00:04:45.927 00:04:45.927 ' 00:04:45.927 09:42:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:45.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.927 --rc genhtml_branch_coverage=1 00:04:45.927 --rc genhtml_function_coverage=1 00:04:45.927 --rc genhtml_legend=1 00:04:45.927 --rc geninfo_all_blocks=1 00:04:45.927 --rc geninfo_unexecuted_blocks=1 00:04:45.927 00:04:45.927 ' 00:04:45.927 09:42:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:45.927 09:42:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56786 00:04:45.927 09:42:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56786 00:04:45.927 09:42:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.927 09:42:34 -- common/autotest_common.sh@829 -- # '[' -z 56786 ']' 00:04:45.927 09:42:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.927 09:42:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.927 09:42:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
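The dpdk_mem_utility flow traced below condenses to two steps: ask the running target to dump its DPDK memory stats over RPC, then post-process the dump with the helper script. Commands and the dump path match the trace; wrapping them in a variable is this sketch's convenience.

    SPDK=/home/vagrant/spdk_repo/spdk
    # env_dpdk_get_mem_stats writes the dump and returns its location,
    # {"filename": "/tmp/spdk_mem_dump.txt"} in the trace below.
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
    # Plain run: heaps / mempools / memzones summary.
    "$SPDK/scripts/dpdk_mem_info.py"
    # -m 0, as in the trace, additionally prints heap 0's free/busy elements.
    "$SPDK/scripts/dpdk_mem_info.py" -m 0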
00:04:45.927 09:42:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.927 09:42:34 -- common/autotest_common.sh@10 -- # set +x 00:04:45.927 [2024-12-15 09:42:34.657534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:45.927 [2024-12-15 09:42:34.657661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56786 ] 00:04:45.927 [2024-12-15 09:42:34.804061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.185 [2024-12-15 09:42:34.952552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.185 [2024-12-15 09:42:34.952731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.752 09:42:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.752 09:42:35 -- common/autotest_common.sh@862 -- # return 0 00:04:46.752 09:42:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.752 09:42:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.752 09:42:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.752 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:46.752 { 00:04:46.752 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.752 } 00:04:46.752 09:42:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.752 09:42:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.752 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:46.752 1 heaps totaling size 820.000000 MiB 00:04:46.752 size: 820.000000 MiB heap id: 0 00:04:46.752 end heaps---------- 00:04:46.752 8 mempools totaling size 598.116089 MiB 00:04:46.752 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.752 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.752 size: 84.521057 MiB name: bdev_io_56786 00:04:46.752 size: 51.011292 MiB name: evtpool_56786 00:04:46.752 size: 50.003479 MiB name: msgpool_56786 00:04:46.752 size: 21.763794 MiB name: PDU_Pool 00:04:46.752 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.752 size: 0.026123 MiB name: Session_Pool 00:04:46.752 end mempools------- 00:04:46.752 6 memzones totaling size 4.142822 MiB 00:04:46.752 size: 1.000366 MiB name: RG_ring_0_56786 00:04:46.752 size: 1.000366 MiB name: RG_ring_1_56786 00:04:46.752 size: 1.000366 MiB name: RG_ring_4_56786 00:04:46.752 size: 1.000366 MiB name: RG_ring_5_56786 00:04:46.752 size: 0.125366 MiB name: RG_ring_2_56786 00:04:46.752 size: 0.015991 MiB name: RG_ring_3_56786 00:04:46.752 end memzones------- 00:04:46.752 09:42:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.752 heap id: 0 total size: 820.000000 MiB number of busy elements: 304 number of free elements: 18 00:04:46.752 list of free elements. 
size: 18.450562 MiB 00:04:46.752 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:46.752 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:46.752 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:46.752 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:46.752 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:46.752 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:46.752 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:46.752 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:46.752 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:46.752 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:46.752 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:46.752 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:46.752 element at address: 0x20001b000000 with size: 0.564148 MiB 00:04:46.752 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:46.752 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:46.752 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:46.752 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:46.752 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:46.752 list of standard malloc elements. size: 199.285034 MiB 00:04:46.752 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:46.752 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:46.752 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:46.752 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:46.752 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:46.752 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:46.752 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:46.752 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:46.752 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:46.752 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:46.752 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:46.752 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:46.752 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:46.752 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:46.752 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:46.752 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:46.752 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:46.752 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:46.753 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0928c0 with size: 0.000244 MiB 
00:04:46.753 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:46.753 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:46.753 element at 
address: 0x20002846b280 with size: 0.000244 MiB 00:04:46.753 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e380 
with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:46.754 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:46.754 list of memzone associated elements. 
size: 602.264404 MiB 00:04:46.754 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:46.754 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.754 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:46.754 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.754 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:46.754 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56786_0 00:04:46.754 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:46.754 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56786_0 00:04:46.754 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:46.754 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56786_0 00:04:46.754 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:46.754 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.754 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:46.754 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.754 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:46.754 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56786 00:04:46.754 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:46.754 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56786 00:04:46.754 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:46.754 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56786 00:04:46.754 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:46.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.754 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:46.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.754 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:46.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.754 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:46.754 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.754 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:46.754 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56786 00:04:46.754 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:46.754 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56786 00:04:46.754 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:46.754 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56786 00:04:46.754 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:46.754 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56786 00:04:46.754 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:46.754 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56786 00:04:46.754 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:46.754 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.754 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:46.754 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.754 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:46.754 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.754 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:46.754 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56786 00:04:46.754 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:46.754 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.754 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:46.754 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.754 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:46.754 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56786 00:04:46.754 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:46.754 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.754 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:46.754 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56786 00:04:46.754 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:46.754 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56786 00:04:46.754 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:46.754 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.754 09:42:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.754 09:42:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56786 00:04:46.754 09:42:35 -- common/autotest_common.sh@936 -- # '[' -z 56786 ']' 00:04:46.754 09:42:35 -- common/autotest_common.sh@940 -- # kill -0 56786 00:04:46.754 09:42:35 -- common/autotest_common.sh@941 -- # uname 00:04:46.754 09:42:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.754 09:42:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56786 00:04:46.754 09:42:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.754 09:42:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.754 killing process with pid 56786 00:04:46.754 09:42:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56786' 00:04:46.754 09:42:35 -- common/autotest_common.sh@955 -- # kill 56786 00:04:46.754 09:42:35 -- common/autotest_common.sh@960 -- # wait 56786 00:04:48.130 00:04:48.130 real 0m2.323s 00:04:48.130 user 0m2.353s 00:04:48.130 sys 0m0.342s 00:04:48.130 09:42:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.130 09:42:36 -- common/autotest_common.sh@10 -- # set +x 00:04:48.130 ************************************ 00:04:48.130 END TEST dpdk_mem_utility 00:04:48.130 ************************************ 00:04:48.130 09:42:36 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.130 09:42:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.130 09:42:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.130 09:42:36 -- common/autotest_common.sh@10 -- # set +x 00:04:48.130 ************************************ 00:04:48.130 START TEST event 00:04:48.130 ************************************ 00:04:48.130 09:42:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.130 * Looking for test storage... 
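The teardown traced just above — 'kill -0' to confirm the pid is still alive, a 'ps --no-headers -o comm=' sanity check on the process name, then 'kill' and 'wait' — is the harness's killprocess helper. A condensed form (the 'sudo' comparison seen in the trace is kept only as a guard; these runs never take that branch):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 1     # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 in these runs
        [[ $name == sudo ]] && return 1             # the real helper signals the child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so the test sees the app's exit code
    }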
00:04:48.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:48.130 09:42:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:48.130 09:42:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.130 09:42:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:48.130 09:42:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.130 09:42:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.130 09:42:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.130 09:42:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.130 09:42:36 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.130 09:42:36 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.130 09:42:36 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.130 09:42:36 -- scripts/common.sh@336 -- # read -ra ver2 00:04:48.130 09:42:36 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.130 09:42:36 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.130 09:42:36 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.130 09:42:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.130 09:42:36 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.130 09:42:36 -- scripts/common.sh@344 -- # : 1 00:04:48.130 09:42:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.130 09:42:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.130 09:42:36 -- scripts/common.sh@364 -- # decimal 1 00:04:48.130 09:42:36 -- scripts/common.sh@352 -- # local d=1 00:04:48.130 09:42:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.130 09:42:36 -- scripts/common.sh@354 -- # echo 1 00:04:48.130 09:42:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:48.130 09:42:36 -- scripts/common.sh@365 -- # decimal 2 00:04:48.130 09:42:36 -- scripts/common.sh@352 -- # local d=2 00:04:48.130 09:42:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.130 09:42:36 -- scripts/common.sh@354 -- # echo 2 00:04:48.130 09:42:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:48.130 09:42:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:48.130 09:42:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:48.130 09:42:36 -- scripts/common.sh@367 -- # return 0 00:04:48.130 09:42:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.130 09:42:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:48.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.130 --rc genhtml_branch_coverage=1 00:04:48.130 --rc genhtml_function_coverage=1 00:04:48.130 --rc genhtml_legend=1 00:04:48.130 --rc geninfo_all_blocks=1 00:04:48.130 --rc geninfo_unexecuted_blocks=1 00:04:48.130 00:04:48.130 ' 00:04:48.130 09:42:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:48.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.130 --rc genhtml_branch_coverage=1 00:04:48.130 --rc genhtml_function_coverage=1 00:04:48.130 --rc genhtml_legend=1 00:04:48.130 --rc geninfo_all_blocks=1 00:04:48.130 --rc geninfo_unexecuted_blocks=1 00:04:48.130 00:04:48.130 ' 00:04:48.130 09:42:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:48.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.130 --rc genhtml_branch_coverage=1 00:04:48.130 --rc genhtml_function_coverage=1 00:04:48.130 --rc genhtml_legend=1 00:04:48.130 --rc geninfo_all_blocks=1 00:04:48.130 --rc geninfo_unexecuted_blocks=1 00:04:48.130 00:04:48.130 ' 00:04:48.130 09:42:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:48.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.131 --rc genhtml_branch_coverage=1 00:04:48.131 --rc genhtml_function_coverage=1 00:04:48.131 --rc genhtml_legend=1 00:04:48.131 --rc geninfo_all_blocks=1 00:04:48.131 --rc geninfo_unexecuted_blocks=1 00:04:48.131 00:04:48.131 ' 00:04:48.131 09:42:36 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:48.131 09:42:36 -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.131 09:42:36 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.131 09:42:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:48.131 09:42:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.131 09:42:36 -- common/autotest_common.sh@10 -- # set +x 00:04:48.131 ************************************ 00:04:48.131 START TEST event_perf 00:04:48.131 ************************************ 00:04:48.131 09:42:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.131 Running I/O for 1 seconds...[2024-12-15 09:42:36.992080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:48.131 [2024-12-15 09:42:36.992179] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56871 ] 00:04:48.131 [2024-12-15 09:42:37.140466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.399 [2024-12-15 09:42:37.298379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.399 [2024-12-15 09:42:37.298423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.399 [2024-12-15 09:42:37.298522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.399 [2024-12-15 09:42:37.298665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.806 Running I/O for 1 seconds... 00:04:49.806 lcore 0: 185171 00:04:49.806 lcore 1: 185174 00:04:49.806 lcore 2: 185175 00:04:49.806 lcore 3: 185176 00:04:49.806 done. 00:04:49.806 00:04:49.806 real 0m1.548s 00:04:49.806 user 0m4.340s 00:04:49.806 sys 0m0.093s 00:04:49.806 09:42:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.806 ************************************ 00:04:49.806 09:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.806 END TEST event_perf 00:04:49.806 ************************************ 00:04:49.806 09:42:38 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.806 09:42:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:49.806 09:42:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.806 09:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:49.806 ************************************ 00:04:49.806 START TEST event_reactor 00:04:49.806 ************************************ 00:04:49.806 09:42:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.806 [2024-12-15 09:42:38.575886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
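event_perf above is launched with '-m 0xF' and reports one counter per lcore; the mask is simply a bitmap of core IDs, and 0x1, 0x3 and 0xF all appear in this log. A throwaway helper to expand one (hypothetical, not part of the repo):

    mask_to_cores() {
        local mask=$(( $1 )) core cores=()
        for ((core = 0; mask; core++, mask >>= 1)); do
            (( mask & 1 )) && cores+=("$core")      # bit N set => core N enabled
        done
        echo "${cores[@]}"
    }

    mask_to_cores 0xF    # -> 0 1 2 3, the four lcore counters above
    mask_to_cores 0x1    # -> 0, the single-reactor tests that follow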
00:04:49.806 [2024-12-15 09:42:38.575988] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56916 ] 00:04:49.806 [2024-12-15 09:42:38.721708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.064 [2024-12-15 09:42:38.866776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.440 test_start 00:04:51.440 oneshot 00:04:51.440 tick 100 00:04:51.440 tick 100 00:04:51.440 tick 250 00:04:51.440 tick 100 00:04:51.440 tick 100 00:04:51.440 tick 250 00:04:51.440 tick 100 00:04:51.440 tick 500 00:04:51.440 tick 100 00:04:51.440 tick 100 00:04:51.440 tick 250 00:04:51.440 tick 100 00:04:51.440 tick 100 00:04:51.440 test_end 00:04:51.440 00:04:51.440 real 0m1.530s 00:04:51.440 user 0m1.344s 00:04:51.440 sys 0m0.078s 00:04:51.440 ************************************ 00:04:51.440 END TEST event_reactor 00:04:51.440 ************************************ 00:04:51.440 09:42:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.440 09:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.440 09:42:40 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.440 09:42:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:51.440 09:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.440 09:42:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.440 ************************************ 00:04:51.440 START TEST event_reactor_perf 00:04:51.440 ************************************ 00:04:51.440 09:42:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.440 [2024-12-15 09:42:40.154861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
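Every block in this log is framed the same way: run_test prints the START banner, times the test command (hence the real/user/sys triplets), and closes with the END banner. An approximate shape of that wrapper — the real autotest_common.sh version also manages xtrace state, which is omitted here:

    run_test() {
        [ "$#" -le 1 ] && return 1      # the "'[' 4 -le 1 ']'" guard in the trace
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # emits the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test "event_reactor_perf" /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1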
00:04:51.440 [2024-12-15 09:42:40.154941] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56947 ] 00:04:51.440 [2024-12-15 09:42:40.296030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.440 [2024-12-15 09:42:40.434203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.816 test_start 00:04:52.817 test_end 00:04:52.817 Performance: 407357 events per second 00:04:52.817 00:04:52.817 real 0m1.513s 00:04:52.817 user 0m1.334s 00:04:52.817 sys 0m0.071s 00:04:52.817 09:42:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.817 09:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.817 ************************************ 00:04:52.817 END TEST event_reactor_perf 00:04:52.817 ************************************ 00:04:52.817 09:42:41 -- event/event.sh@49 -- # uname -s 00:04:52.817 09:42:41 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.817 09:42:41 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.817 09:42:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.817 09:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.817 09:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.817 ************************************ 00:04:52.817 START TEST event_scheduler 00:04:52.817 ************************************ 00:04:52.817 09:42:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.817 * Looking for test storage... 00:04:52.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.817 09:42:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.817 09:42:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.817 09:42:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.817 09:42:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.817 09:42:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.817 09:42:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.817 09:42:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.817 09:42:41 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.817 09:42:41 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.817 09:42:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.817 09:42:41 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.817 09:42:41 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.817 09:42:41 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.817 09:42:41 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.817 09:42:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.817 09:42:41 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.817 09:42:41 -- scripts/common.sh@344 -- # : 1 00:04:52.817 09:42:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.817 09:42:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.817 09:42:41 -- scripts/common.sh@364 -- # decimal 1 00:04:52.817 09:42:41 -- scripts/common.sh@352 -- # local d=1 00:04:52.817 09:42:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.817 09:42:41 -- scripts/common.sh@354 -- # echo 1 00:04:52.817 09:42:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.817 09:42:41 -- scripts/common.sh@365 -- # decimal 2 00:04:52.817 09:42:41 -- scripts/common.sh@352 -- # local d=2 00:04:52.817 09:42:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.817 09:42:41 -- scripts/common.sh@354 -- # echo 2 00:04:52.817 09:42:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.817 09:42:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.817 09:42:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.817 09:42:41 -- scripts/common.sh@367 -- # return 0 00:04:52.817 09:42:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.817 09:42:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.817 --rc genhtml_branch_coverage=1 00:04:52.817 --rc genhtml_function_coverage=1 00:04:52.817 --rc genhtml_legend=1 00:04:52.817 --rc geninfo_all_blocks=1 00:04:52.817 --rc geninfo_unexecuted_blocks=1 00:04:52.817 00:04:52.817 ' 00:04:52.817 09:42:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.817 --rc genhtml_branch_coverage=1 00:04:52.817 --rc genhtml_function_coverage=1 00:04:52.817 --rc genhtml_legend=1 00:04:52.817 --rc geninfo_all_blocks=1 00:04:52.817 --rc geninfo_unexecuted_blocks=1 00:04:52.817 00:04:52.817 ' 00:04:52.817 09:42:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.817 --rc genhtml_branch_coverage=1 00:04:52.817 --rc genhtml_function_coverage=1 00:04:52.817 --rc genhtml_legend=1 00:04:52.817 --rc geninfo_all_blocks=1 00:04:52.817 --rc geninfo_unexecuted_blocks=1 00:04:52.817 00:04:52.817 ' 00:04:52.817 09:42:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.817 --rc genhtml_branch_coverage=1 00:04:52.817 --rc genhtml_function_coverage=1 00:04:52.817 --rc genhtml_legend=1 00:04:52.817 --rc geninfo_all_blocks=1 00:04:52.817 --rc geninfo_unexecuted_blocks=1 00:04:52.817 00:04:52.817 ' 00:04:52.817 09:42:41 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.817 09:42:41 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57022 00:04:52.817 09:42:41 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.817 09:42:41 -- scheduler/scheduler.sh@37 -- # waitforlisten 57022 00:04:52.817 09:42:41 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.817 09:42:41 -- common/autotest_common.sh@829 -- # '[' -z 57022 ']' 00:04:52.817 09:42:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.817 09:42:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.817 09:42:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
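waitforlisten, invoked here and before every other spdk_tgt-style launch in this log, polls until the freshly forked daemon both stays alive and answers on its RPC socket. A simplified equivalent — probing with rpc_get_methods is an assumption; the real autotest_common.sh helper is more defensive about socket types and retry pacing:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # died before it ever listened
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &> /dev/null; then
                return 0                               # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }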
00:04:52.817 09:42:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.817 09:42:41 -- common/autotest_common.sh@10 -- # set +x 00:04:53.078 [2024-12-15 09:42:41.886092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:53.078 [2024-12-15 09:42:41.886208] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57022 ] 00:04:53.078 [2024-12-15 09:42:42.035705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.337 [2024-12-15 09:42:42.224115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.337 [2024-12-15 09:42:42.224771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.337 [2024-12-15 09:42:42.225409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.337 [2024-12-15 09:42:42.225636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.904 09:42:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.904 09:42:42 -- common/autotest_common.sh@862 -- # return 0 00:04:53.904 09:42:42 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:53.904 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.904 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:53.904 POWER: Env isn't set yet! 00:04:53.904 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:53.904 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.904 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.904 POWER: Attempting to initialise PSTAT power management... 00:04:53.904 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.904 POWER: Cannot set governor of lcore 0 to performance 00:04:53.904 POWER: Attempting to initialise AMD PSTATE power management... 00:04:53.904 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.904 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.904 POWER: Attempting to initialise CPPC power management... 00:04:53.904 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.904 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.904 POWER: Attempting to initialise VM power management... 
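Each of these governor probes comes up empty inside the VM (the GUEST_CHANNEL failure logged next is the last of them), so the dynamic scheduler runs without a working governor. The same configuration the test drives through rpc_cmd can be issued by hand against a target started with --wait-for-rpc, as this one is; a sketch using the repo's rpc.py:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic    # what scheduler.sh@39 issues in this trace
    $rpc framework_start_init               # leave --wait-for-rpc startup mode
    $rpc framework_get_reactors             # inspect which core each thread landed on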
00:04:53.904 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:53.904 POWER: Unable to set Power Management Environment for lcore 0 00:04:53.904 [2024-12-15 09:42:42.710775] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:53.904 [2024-12-15 09:42:42.710791] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:53.904 [2024-12-15 09:42:42.710800] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:53.904 [2024-12-15 09:42:42.710816] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:53.904 [2024-12-15 09:42:42.710824] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:53.904 [2024-12-15 09:42:42.710832] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:53.904 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.904 09:42:42 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:53.904 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.904 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.164 [2024-12-15 09:42:42.933890] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:54.164 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.164 09:42:42 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.164 09:42:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.165 09:42:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 ************************************ 00:04:54.165 START TEST scheduler_create_thread 00:04:54.165 ************************************ 00:04:54.165 09:42:42 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:54.165 09:42:42 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.165 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 2 00:04:54.165 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:42 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.165 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 3 00:04:54.165 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:42 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.165 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 4 00:04:54.165 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:42 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:54.165 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 5 00:04:54.165 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:42 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:54.165 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 6 00:04:54.165 09:42:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:42 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:54.165 09:42:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 7 00:04:54.165 09:42:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:54.165 09:42:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 8 00:04:54.165 09:42:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:54.165 09:42:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 9 00:04:54.165 09:42:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:54.165 09:42:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 10 00:04:54.165 09:42:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:54.165 09:42:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 09:42:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.165 09:42:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.165 09:42:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.165 09:42:43 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.165 09:42:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.165 09:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:55.542 09:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.542 09:42:44 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.542 09:42:44 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.542 09:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.542 09:42:44 -- common/autotest_common.sh@10 -- # set +x 00:04:56.918 09:42:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.918 00:04:56.918 real 0m2.616s 00:04:56.918 user 0m0.011s 00:04:56.918 sys 0m0.008s 00:04:56.918 ************************************ 00:04:56.918 END TEST scheduler_create_thread 00:04:56.918 ************************************ 00:04:56.918 09:42:45 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.918 09:42:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.918 09:42:45 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.918 09:42:45 -- scheduler/scheduler.sh@46 -- # killprocess 57022 00:04:56.918 09:42:45 -- common/autotest_common.sh@936 -- # '[' -z 57022 ']' 00:04:56.918 09:42:45 -- common/autotest_common.sh@940 -- # kill -0 57022 00:04:56.918 09:42:45 -- common/autotest_common.sh@941 -- # uname 00:04:56.918 09:42:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.918 09:42:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57022 00:04:56.919 09:42:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:56.919 killing process with pid 57022 00:04:56.919 09:42:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:56.919 09:42:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57022' 00:04:56.919 09:42:45 -- common/autotest_common.sh@955 -- # kill 57022 00:04:56.919 09:42:45 -- common/autotest_common.sh@960 -- # wait 57022 00:04:57.176 [2024-12-15 09:42:46.048393] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:57.740 00:04:57.740 real 0m5.062s 00:04:57.740 user 0m8.514s 00:04:57.740 sys 0m0.361s 00:04:57.740 09:42:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.740 ************************************ 00:04:57.740 END TEST event_scheduler 00:04:57.740 ************************************ 00:04:57.740 09:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.997 09:42:46 -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.997 09:42:46 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.997 09:42:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.997 09:42:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.997 09:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.997 ************************************ 00:04:57.997 START TEST app_repeat 00:04:57.997 ************************************ 00:04:57.997 09:42:46 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:57.997 09:42:46 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.997 09:42:46 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.997 09:42:46 -- event/event.sh@13 -- # local nbd_list 00:04:57.997 09:42:46 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.997 09:42:46 -- event/event.sh@14 -- # local bdev_list 00:04:57.997 09:42:46 -- event/event.sh@15 -- # local repeat_times=4 00:04:57.997 09:42:46 -- event/event.sh@17 -- # modprobe nbd 00:04:57.997 09:42:46 -- event/event.sh@19 -- # repeat_pid=57128 00:04:57.997 09:42:46 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.997 Process app_repeat pid: 57128 00:04:57.997 09:42:46 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57128' 00:04:57.997 09:42:46 -- event/event.sh@23 -- # for i in {0..2} 00:04:57.997 09:42:46 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.997 spdk_app_start Round 0 00:04:57.997 09:42:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.997 09:42:46 -- event/event.sh@25 -- # waitforlisten 57128 /var/tmp/spdk-nbd.sock 00:04:57.997 09:42:46 -- common/autotest_common.sh@829 -- # '[' -z 57128 ']' 00:04:57.997 09:42:46 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.997 09:42:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.997 09:42:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.997 09:42:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.997 09:42:46 -- common/autotest_common.sh@10 -- # set +x 00:04:57.997 [2024-12-15 09:42:46.821887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.998 [2024-12-15 09:42:46.821966] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57128 ] 00:04:57.998 [2024-12-15 09:42:46.961417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.255 [2024-12-15 09:42:47.113250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.255 [2024-12-15 09:42:47.113300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.821 09:42:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.821 09:42:47 -- common/autotest_common.sh@862 -- # return 0 00:04:58.821 09:42:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.079 Malloc0 00:04:59.079 09:42:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.079 Malloc1 00:04:59.338 09:42:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@12 -- # local i 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.338 /dev/nbd0 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.338 09:42:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.338 09:42:48 -- common/autotest_common.sh@867 -- # local i 00:04:59.338 09:42:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.338 09:42:48 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.338 09:42:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.338 09:42:48 -- common/autotest_common.sh@871 -- # break 00:04:59.338 09:42:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.338 09:42:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.338 09:42:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.338 1+0 records in 00:04:59.338 1+0 records out 00:04:59.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026397 s, 15.5 MB/s 00:04:59.338 09:42:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.338 09:42:48 -- common/autotest_common.sh@884 -- # size=4096 00:04:59.338 09:42:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.338 09:42:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.338 09:42:48 -- common/autotest_common.sh@887 -- # return 0 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.338 09:42:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.596 /dev/nbd1 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.596 09:42:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.596 09:42:48 -- common/autotest_common.sh@867 -- # local i 00:04:59.596 09:42:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.596 09:42:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.596 09:42:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.596 09:42:48 -- common/autotest_common.sh@871 -- # break 00:04:59.596 09:42:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.596 09:42:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.596 09:42:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.596 1+0 records in 00:04:59.596 1+0 records out 00:04:59.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002637 s, 15.5 MB/s 00:04:59.596 09:42:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.596 09:42:48 -- common/autotest_common.sh@884 -- # size=4096 00:04:59.596 09:42:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.596 09:42:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.596 09:42:48 -- common/autotest_common.sh@887 -- # return 0 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.596 09:42:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.854 { 00:04:59.854 "nbd_device": "/dev/nbd0", 00:04:59.854 "bdev_name": "Malloc0" 00:04:59.854 }, 00:04:59.854 { 00:04:59.854 "nbd_device": "/dev/nbd1", 
00:04:59.854 "bdev_name": "Malloc1" 00:04:59.854 } 00:04:59.854 ]' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.854 { 00:04:59.854 "nbd_device": "/dev/nbd0", 00:04:59.854 "bdev_name": "Malloc0" 00:04:59.854 }, 00:04:59.854 { 00:04:59.854 "nbd_device": "/dev/nbd1", 00:04:59.854 "bdev_name": "Malloc1" 00:04:59.854 } 00:04:59.854 ]' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.854 /dev/nbd1' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.854 /dev/nbd1' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.854 256+0 records in 00:04:59.854 256+0 records out 00:04:59.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00756662 s, 139 MB/s 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.854 256+0 records in 00:04:59.854 256+0 records out 00:04:59.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206389 s, 50.8 MB/s 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.854 256+0 records in 00:04:59.854 256+0 records out 00:04:59.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019138 s, 54.8 MB/s 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@51 -- # local i 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.854 09:42:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@41 -- # break 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.113 09:42:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@41 -- # break 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.370 09:42:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@65 -- # true 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.628 09:42:49 -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.628 09:42:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.885 09:42:49 -- event/event.sh@35 -- # sleep 3 00:05:01.451 [2024-12-15 09:42:50.330001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.451 [2024-12-15 09:42:50.459503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.451 [2024-12-15 
09:42:50.459578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.708 [2024-12-15 09:42:50.565101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.708 [2024-12-15 09:42:50.565141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.236 09:42:52 -- event/event.sh@23 -- # for i in {0..2} 00:05:04.236 spdk_app_start Round 1 00:05:04.236 09:42:52 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.236 09:42:52 -- event/event.sh@25 -- # waitforlisten 57128 /var/tmp/spdk-nbd.sock 00:05:04.236 09:42:52 -- common/autotest_common.sh@829 -- # '[' -z 57128 ']' 00:05:04.236 09:42:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.236 09:42:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.236 09:42:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.236 09:42:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.236 09:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:04.236 09:42:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.236 09:42:52 -- common/autotest_common.sh@862 -- # return 0 00:05:04.236 09:42:52 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.236 Malloc0 00:05:04.236 09:42:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.494 Malloc1 00:05:04.494 09:42:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@12 -- # local i 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.494 09:42:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.753 /dev/nbd0 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.753 09:42:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:04.753 09:42:53 -- common/autotest_common.sh@867 -- # local i 00:05:04.753 09:42:53 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:04.753 09:42:53 -- common/autotest_common.sh@871 -- # break 00:05:04.753 09:42:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.753 1+0 records in 00:05:04.753 1+0 records out 00:05:04.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015327 s, 26.7 MB/s 00:05:04.753 09:42:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.753 09:42:53 -- common/autotest_common.sh@884 -- # size=4096 00:05:04.753 09:42:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.753 09:42:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:04.753 09:42:53 -- common/autotest_common.sh@887 -- # return 0 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.753 /dev/nbd1 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.753 09:42:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.753 09:42:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:04.753 09:42:53 -- common/autotest_common.sh@867 -- # local i 00:05:04.753 09:42:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:04.753 09:42:53 -- common/autotest_common.sh@871 -- # break 00:05:04.753 09:42:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:04.753 09:42:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.020 1+0 records in 00:05:05.020 1+0 records out 00:05:05.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260408 s, 15.7 MB/s 00:05:05.020 09:42:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.020 09:42:53 -- common/autotest_common.sh@884 -- # size=4096 00:05:05.020 09:42:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.020 09:42:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.020 09:42:53 -- common/autotest_common.sh@887 -- # return 0 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.020 { 00:05:05.020 "nbd_device": "/dev/nbd0", 00:05:05.020 "bdev_name": "Malloc0" 00:05:05.020 }, 00:05:05.020 { 00:05:05.020 
"nbd_device": "/dev/nbd1", 00:05:05.020 "bdev_name": "Malloc1" 00:05:05.020 } 00:05:05.020 ]' 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.020 { 00:05:05.020 "nbd_device": "/dev/nbd0", 00:05:05.020 "bdev_name": "Malloc0" 00:05:05.020 }, 00:05:05.020 { 00:05:05.020 "nbd_device": "/dev/nbd1", 00:05:05.020 "bdev_name": "Malloc1" 00:05:05.020 } 00:05:05.020 ]' 00:05:05.020 09:42:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.020 /dev/nbd1' 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.020 /dev/nbd1' 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.020 256+0 records in 00:05:05.020 256+0 records out 00:05:05.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646535 s, 162 MB/s 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.020 09:42:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.278 256+0 records in 00:05:05.278 256+0 records out 00:05:05.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178241 s, 58.8 MB/s 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.278 256+0 records in 00:05:05.278 256+0 records out 00:05:05.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150401 s, 69.7 MB/s 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.278 09:42:54 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@51 -- # local i 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@41 -- # break 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.278 09:42:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@41 -- # break 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.536 09:42:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@65 -- # true 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.794 09:42:54 -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.794 09:42:54 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.052 09:42:54 -- event/event.sh@35 -- # sleep 3 00:05:06.616 [2024-12-15 09:42:55.605060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.874 [2024-12-15 09:42:55.737277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
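
The start/stop sequences traced above hinge on polling /proc/partitions until the nbd name appears or disappears. The helpers below are a minimal sketch reconstructed from the traced commands in common/autotest_common.sh and bdev/nbd_common.sh: the 20-iteration bound, the grep pattern, and the dd read-back check are all visible in the trace, while the sleep interval, the /tmp scratch path (standing in for the repo's test/event/nbdtest file), and the exact control flow are assumptions.

    # waitfornbd: wait for /dev/<name> to appear in /proc/partitions, then
    # prove it actually serves reads before declaring the device up.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the trace does not show a delay
        done
        # read one 4 KiB block back; a zero-size result means the device is dead
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    # waitfornbd_exit: the inverse -- poll until the name drops out of the table,
    # which is what nbd_stop_disks just relied on for nbd0 and nbd1 above.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed back-off
        done
    }
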
00:05:06.874 [2024-12-15 09:42:55.737307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.874 [2024-12-15 09:42:55.841149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.874 [2024-12-15 09:42:55.841205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.401 09:42:57 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.401 spdk_app_start Round 2 00:05:09.401 09:42:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.401 09:42:57 -- event/event.sh@25 -- # waitforlisten 57128 /var/tmp/spdk-nbd.sock 00:05:09.401 09:42:57 -- common/autotest_common.sh@829 -- # '[' -z 57128 ']' 00:05:09.401 09:42:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.401 09:42:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.401 09:42:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.401 09:42:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.401 09:42:57 -- common/autotest_common.sh@10 -- # set +x 00:05:09.401 09:42:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.401 09:42:58 -- common/autotest_common.sh@862 -- # return 0 00:05:09.401 09:42:58 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.401 Malloc0 00:05:09.401 09:42:58 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.659 Malloc1 00:05:09.659 09:42:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.659 09:42:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@12 -- # local i 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.660 09:42:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.917 /dev/nbd0 00:05:09.917 09:42:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.917 09:42:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.917 09:42:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:09.917 09:42:58 -- common/autotest_common.sh@867 -- # local i 00:05:09.917 09:42:58 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.917 09:42:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.917 09:42:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:09.917 09:42:58 -- common/autotest_common.sh@871 -- # break 00:05:09.917 09:42:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.917 09:42:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.917 09:42:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.917 1+0 records in 00:05:09.917 1+0 records out 00:05:09.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313241 s, 13.1 MB/s 00:05:09.917 09:42:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.917 09:42:58 -- common/autotest_common.sh@884 -- # size=4096 00:05:09.917 09:42:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.917 09:42:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.917 09:42:58 -- common/autotest_common.sh@887 -- # return 0 00:05:09.917 09:42:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.917 09:42:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.917 09:42:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.175 /dev/nbd1 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.175 09:42:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:10.175 09:42:59 -- common/autotest_common.sh@867 -- # local i 00:05:10.175 09:42:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.175 09:42:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.175 09:42:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:10.175 09:42:59 -- common/autotest_common.sh@871 -- # break 00:05:10.175 09:42:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.175 09:42:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.175 09:42:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.175 1+0 records in 00:05:10.175 1+0 records out 00:05:10.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179728 s, 22.8 MB/s 00:05:10.175 09:42:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.175 09:42:59 -- common/autotest_common.sh@884 -- # size=4096 00:05:10.175 09:42:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.175 09:42:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.175 09:42:59 -- common/autotest_common.sh@887 -- # return 0 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.175 09:42:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.434 { 00:05:10.434 "nbd_device": "/dev/nbd0", 00:05:10.434 "bdev_name": "Malloc0" 
00:05:10.434 }, 00:05:10.434 { 00:05:10.434 "nbd_device": "/dev/nbd1", 00:05:10.434 "bdev_name": "Malloc1" 00:05:10.434 } 00:05:10.434 ]' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.434 { 00:05:10.434 "nbd_device": "/dev/nbd0", 00:05:10.434 "bdev_name": "Malloc0" 00:05:10.434 }, 00:05:10.434 { 00:05:10.434 "nbd_device": "/dev/nbd1", 00:05:10.434 "bdev_name": "Malloc1" 00:05:10.434 } 00:05:10.434 ]' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.434 /dev/nbd1' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.434 /dev/nbd1' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.434 256+0 records in 00:05:10.434 256+0 records out 00:05:10.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758208 s, 138 MB/s 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.434 256+0 records in 00:05:10.434 256+0 records out 00:05:10.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148583 s, 70.6 MB/s 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.434 256+0 records in 00:05:10.434 256+0 records out 00:05:10.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183691 s, 57.1 MB/s 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@51 -- # local i 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.434 09:42:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.692 09:42:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@41 -- # break 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.693 09:42:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@41 -- # break 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@65 -- # true 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.951 09:42:59 -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.951 09:42:59 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.209 09:43:00 -- event/event.sh@35 -- # sleep 3 00:05:12.144 [2024-12-15 09:43:00.827177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.144 [2024-12-15 09:43:00.956913] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:05:12.144 [2024-12-15 09:43:00.956919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.144 [2024-12-15 09:43:01.060624] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.144 [2024-12-15 09:43:01.060669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.674 09:43:03 -- event/event.sh@38 -- # waitforlisten 57128 /var/tmp/spdk-nbd.sock 00:05:14.674 09:43:03 -- common/autotest_common.sh@829 -- # '[' -z 57128 ']' 00:05:14.674 09:43:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.674 09:43:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.674 09:43:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.674 09:43:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.674 09:43:03 -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 09:43:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.674 09:43:03 -- common/autotest_common.sh@862 -- # return 0 00:05:14.674 09:43:03 -- event/event.sh@39 -- # killprocess 57128 00:05:14.674 09:43:03 -- common/autotest_common.sh@936 -- # '[' -z 57128 ']' 00:05:14.674 09:43:03 -- common/autotest_common.sh@940 -- # kill -0 57128 00:05:14.674 09:43:03 -- common/autotest_common.sh@941 -- # uname 00:05:14.674 09:43:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.674 09:43:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57128 00:05:14.674 09:43:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.674 09:43:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.674 09:43:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57128' 00:05:14.674 killing process with pid 57128 00:05:14.674 09:43:03 -- common/autotest_common.sh@955 -- # kill 57128 00:05:14.675 09:43:03 -- common/autotest_common.sh@960 -- # wait 57128 00:05:15.242 spdk_app_start is called in Round 0. 00:05:15.242 Shutdown signal received, stop current app iteration 00:05:15.242 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:15.242 spdk_app_start is called in Round 1. 00:05:15.242 Shutdown signal received, stop current app iteration 00:05:15.242 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:15.242 spdk_app_start is called in Round 2. 00:05:15.242 Shutdown signal received, stop current app iteration 00:05:15.242 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:15.242 spdk_app_start is called in Round 3. 
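
The Round 0 through Round 3 messages here, and the matching shutdown notices just below, come from a driver loop in test/event/event.sh. The following is a hedged reconstruction from the traced commands: the {0..2} loop, the socket path, the rpc.py spdk_kill_instance call, the sleep, and the final waitforlisten/killprocess pair are all visible in the trace, while the inner write/verify work is summarized as a comment.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$sock"   # app is up once the socket listens
        # ...create Malloc0/Malloc1, map them to /dev/nbd0 and /dev/nbd1, write
        # 1 MiB of random data, and cmp it back (the passes traced above)...
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # ask the app to restart
        sleep 3                                        # let it reinitialize
    done
    waitforlisten "$repeat_pid" "$sock"
    killprocess "$repeat_pid"    # Round 3 ends with a real kill, per the trace
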
00:05:15.242 Shutdown signal received, stop current app iteration 00:05:15.242 09:43:04 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.242 09:43:04 -- event/event.sh@42 -- # return 0 00:05:15.242 00:05:15.242 real 0m17.269s 00:05:15.242 user 0m37.143s 00:05:15.242 sys 0m1.887s 00:05:15.242 09:43:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.242 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.242 ************************************ 00:05:15.242 END TEST app_repeat 00:05:15.242 ************************************ 00:05:15.242 09:43:04 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.242 09:43:04 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:15.242 09:43:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.242 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.242 ************************************ 00:05:15.242 START TEST cpu_locks 00:05:15.242 ************************************ 00:05:15.242 09:43:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:15.242 * Looking for test storage... 00:05:15.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:15.242 09:43:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:15.242 09:43:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:15.242 09:43:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:15.242 09:43:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:15.242 09:43:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:15.242 09:43:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:15.242 09:43:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:15.242 09:43:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:15.242 09:43:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.242 09:43:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:15.242 09:43:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:15.242 09:43:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:15.242 09:43:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:15.242 09:43:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:15.242 09:43:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:15.242 09:43:04 -- scripts/common.sh@344 -- # : 1 00:05:15.242 09:43:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:15.242 09:43:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.242 09:43:04 -- scripts/common.sh@364 -- # decimal 1 00:05:15.242 09:43:04 -- scripts/common.sh@352 -- # local d=1 00:05:15.242 09:43:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.242 09:43:04 -- scripts/common.sh@354 -- # echo 1 00:05:15.242 09:43:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:15.242 09:43:04 -- scripts/common.sh@365 -- # decimal 2 00:05:15.242 09:43:04 -- scripts/common.sh@352 -- # local d=2 00:05:15.242 09:43:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.242 09:43:04 -- scripts/common.sh@354 -- # echo 2 00:05:15.242 09:43:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:15.242 09:43:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:15.242 09:43:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:15.242 09:43:04 -- scripts/common.sh@367 -- # return 0 00:05:15.242 09:43:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.242 --rc genhtml_branch_coverage=1 00:05:15.242 --rc genhtml_function_coverage=1 00:05:15.242 --rc genhtml_legend=1 00:05:15.242 --rc geninfo_all_blocks=1 00:05:15.242 --rc geninfo_unexecuted_blocks=1 00:05:15.242 00:05:15.242 ' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.242 --rc genhtml_branch_coverage=1 00:05:15.242 --rc genhtml_function_coverage=1 00:05:15.242 --rc genhtml_legend=1 00:05:15.242 --rc geninfo_all_blocks=1 00:05:15.242 --rc geninfo_unexecuted_blocks=1 00:05:15.242 00:05:15.242 ' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.242 --rc genhtml_branch_coverage=1 00:05:15.242 --rc genhtml_function_coverage=1 00:05:15.242 --rc genhtml_legend=1 00:05:15.242 --rc geninfo_all_blocks=1 00:05:15.242 --rc geninfo_unexecuted_blocks=1 00:05:15.242 00:05:15.242 ' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:15.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.242 --rc genhtml_branch_coverage=1 00:05:15.242 --rc genhtml_function_coverage=1 00:05:15.242 --rc genhtml_legend=1 00:05:15.242 --rc geninfo_all_blocks=1 00:05:15.242 --rc geninfo_unexecuted_blocks=1 00:05:15.242 00:05:15.242 ' 00:05:15.242 09:43:04 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.242 09:43:04 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.242 09:43:04 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.242 09:43:04 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.242 09:43:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.242 09:43:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.242 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.242 ************************************ 00:05:15.242 START TEST default_locks 00:05:15.242 ************************************ 00:05:15.242 09:43:04 -- common/autotest_common.sh@1114 -- # default_locks 00:05:15.243 09:43:04 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57552 00:05:15.243 09:43:04 -- event/cpu_locks.sh@47 -- # waitforlisten 57552 00:05:15.243 09:43:04 -- common/autotest_common.sh@829 -- # '[' -z 57552 ']' 00:05:15.243 09:43:04 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.243 09:43:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.243 09:43:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.243 09:43:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.243 09:43:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.243 09:43:04 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.501 [2024-12-15 09:43:04.324926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.502 [2024-12-15 09:43:04.325040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57552 ] 00:05:15.502 [2024-12-15 09:43:04.474580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.760 [2024-12-15 09:43:04.645180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.760 [2024-12-15 09:43:04.645408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.134 09:43:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.134 09:43:05 -- common/autotest_common.sh@862 -- # return 0 00:05:17.134 09:43:05 -- event/cpu_locks.sh@49 -- # locks_exist 57552 00:05:17.134 09:43:05 -- event/cpu_locks.sh@22 -- # lslocks -p 57552 00:05:17.134 09:43:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.134 09:43:05 -- event/cpu_locks.sh@50 -- # killprocess 57552 00:05:17.134 09:43:05 -- common/autotest_common.sh@936 -- # '[' -z 57552 ']' 00:05:17.134 09:43:05 -- common/autotest_common.sh@940 -- # kill -0 57552 00:05:17.134 09:43:05 -- common/autotest_common.sh@941 -- # uname 00:05:17.134 09:43:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.134 09:43:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57552 00:05:17.134 09:43:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.134 killing process with pid 57552 00:05:17.134 09:43:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.134 09:43:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57552' 00:05:17.134 09:43:06 -- common/autotest_common.sh@955 -- # kill 57552 00:05:17.134 09:43:06 -- common/autotest_common.sh@960 -- # wait 57552 00:05:18.508 09:43:07 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57552 00:05:18.508 09:43:07 -- common/autotest_common.sh@650 -- # local es=0 00:05:18.508 09:43:07 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57552 00:05:18.508 09:43:07 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:18.508 09:43:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.508 09:43:07 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:18.508 09:43:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.508 09:43:07 -- common/autotest_common.sh@653 -- # waitforlisten 57552 00:05:18.508 09:43:07 -- common/autotest_common.sh@829 -- # '[' -z 57552 ']' 00:05:18.508 09:43:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.508 09:43:07 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.508 09:43:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.508 09:43:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.508 09:43:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57552) - No such process 00:05:18.509 ERROR: process (pid: 57552) is no longer running 00:05:18.509 09:43:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.509 09:43:07 -- common/autotest_common.sh@862 -- # return 1 00:05:18.509 09:43:07 -- common/autotest_common.sh@653 -- # es=1 00:05:18.509 09:43:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.509 09:43:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.509 09:43:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.509 09:43:07 -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.509 09:43:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.509 09:43:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.509 09:43:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.509 00:05:18.509 real 0m3.041s 00:05:18.509 user 0m3.198s 00:05:18.509 sys 0m0.439s 00:05:18.509 09:43:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.509 ************************************ 00:05:18.509 END TEST default_locks 00:05:18.509 09:43:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.509 ************************************ 00:05:18.509 09:43:07 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.509 09:43:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.509 09:43:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.509 09:43:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.509 ************************************ 00:05:18.509 START TEST default_locks_via_rpc 00:05:18.509 ************************************ 00:05:18.509 09:43:07 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:18.509 09:43:07 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57617 00:05:18.509 09:43:07 -- event/cpu_locks.sh@63 -- # waitforlisten 57617 00:05:18.509 09:43:07 -- common/autotest_common.sh@829 -- # '[' -z 57617 ']' 00:05:18.509 09:43:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.509 09:43:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.509 09:43:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.509 09:43:07 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.509 09:43:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.509 09:43:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.509 [2024-12-15 09:43:07.387802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
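The NOT waitforlisten 57552 sequence traced above is the expected-failure path: once the target has been killed, waitforlisten must return non-zero, and the NOT wrapper inverts that status so the test still passes. A minimal sketch of the wrapper, inferred from the xtrace (the real helper in autotest_common.sh also validates its argument type and handles signal exit codes above 128):

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?    # capture the wrapped command's exit status
        (( es != 0 ))    # invert it: a failing command makes NOT return 0
    }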
00:05:18.509 [2024-12-15 09:43:07.387891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57617 ] 00:05:18.766 [2024-12-15 09:43:07.529062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.766 [2024-12-15 09:43:07.703768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.766 [2024-12-15 09:43:07.703975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.140 09:43:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.140 09:43:08 -- common/autotest_common.sh@862 -- # return 0 00:05:20.140 09:43:08 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.140 09:43:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.140 09:43:08 -- common/autotest_common.sh@10 -- # set +x 00:05:20.140 09:43:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.140 09:43:08 -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.140 09:43:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.140 09:43:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.140 09:43:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.140 09:43:08 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.140 09:43:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.140 09:43:08 -- common/autotest_common.sh@10 -- # set +x 00:05:20.140 09:43:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.140 09:43:08 -- event/cpu_locks.sh@71 -- # locks_exist 57617 00:05:20.140 09:43:08 -- event/cpu_locks.sh@22 -- # lslocks -p 57617 00:05:20.140 09:43:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.140 09:43:09 -- event/cpu_locks.sh@73 -- # killprocess 57617 00:05:20.140 09:43:09 -- common/autotest_common.sh@936 -- # '[' -z 57617 ']' 00:05:20.140 09:43:09 -- common/autotest_common.sh@940 -- # kill -0 57617 00:05:20.140 09:43:09 -- common/autotest_common.sh@941 -- # uname 00:05:20.140 09:43:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.140 09:43:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57617 00:05:20.140 09:43:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.140 09:43:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.140 killing process with pid 57617 00:05:20.140 09:43:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57617' 00:05:20.140 09:43:09 -- common/autotest_common.sh@955 -- # kill 57617 00:05:20.140 09:43:09 -- common/autotest_common.sh@960 -- # wait 57617 00:05:22.104 00:05:22.104 real 0m3.264s 00:05:22.104 user 0m3.405s 00:05:22.104 sys 0m0.408s 00:05:22.104 09:43:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.104 ************************************ 00:05:22.104 END TEST default_locks_via_rpc 00:05:22.104 ************************************ 00:05:22.104 09:43:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.104 09:43:10 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.104 09:43:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.104 09:43:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.104 09:43:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.104 
************************************ 00:05:22.104 START TEST non_locking_app_on_locked_coremask 00:05:22.104 ************************************ 00:05:22.104 09:43:10 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:22.104 09:43:10 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57683 00:05:22.104 09:43:10 -- event/cpu_locks.sh@81 -- # waitforlisten 57683 /var/tmp/spdk.sock 00:05:22.104 09:43:10 -- common/autotest_common.sh@829 -- # '[' -z 57683 ']' 00:05:22.104 09:43:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.104 09:43:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.104 09:43:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.104 09:43:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.104 09:43:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.104 09:43:10 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.104 [2024-12-15 09:43:10.700871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.104 [2024-12-15 09:43:10.700991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57683 ] 00:05:22.104 [2024-12-15 09:43:10.850673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.104 [2024-12-15 09:43:11.031393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.104 [2024-12-15 09:43:11.031596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.492 09:43:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.492 09:43:12 -- common/autotest_common.sh@862 -- # return 0 00:05:23.492 09:43:12 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.493 09:43:12 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57707 00:05:23.493 09:43:12 -- event/cpu_locks.sh@85 -- # waitforlisten 57707 /var/tmp/spdk2.sock 00:05:23.493 09:43:12 -- common/autotest_common.sh@829 -- # '[' -z 57707 ']' 00:05:23.493 09:43:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.493 09:43:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.493 09:43:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.493 09:43:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.493 09:43:12 -- common/autotest_common.sh@10 -- # set +x 00:05:23.493 [2024-12-15 09:43:12.291222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.493 [2024-12-15 09:43:12.291394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57707 ] 00:05:23.493 [2024-12-15 09:43:12.463463] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.493 [2024-12-15 09:43:12.463508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.059 [2024-12-15 09:43:12.774686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.059 [2024-12-15 09:43:12.774838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.992 09:43:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.992 09:43:13 -- common/autotest_common.sh@862 -- # return 0 00:05:24.992 09:43:13 -- event/cpu_locks.sh@87 -- # locks_exist 57683 00:05:24.992 09:43:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.992 09:43:13 -- event/cpu_locks.sh@22 -- # lslocks -p 57683 00:05:25.249 09:43:14 -- event/cpu_locks.sh@89 -- # killprocess 57683 00:05:25.249 09:43:14 -- common/autotest_common.sh@936 -- # '[' -z 57683 ']' 00:05:25.249 09:43:14 -- common/autotest_common.sh@940 -- # kill -0 57683 00:05:25.249 09:43:14 -- common/autotest_common.sh@941 -- # uname 00:05:25.249 09:43:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.249 09:43:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57683 00:05:25.249 09:43:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.249 09:43:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.249 killing process with pid 57683 00:05:25.249 09:43:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57683' 00:05:25.249 09:43:14 -- common/autotest_common.sh@955 -- # kill 57683 00:05:25.249 09:43:14 -- common/autotest_common.sh@960 -- # wait 57683 00:05:27.777 09:43:16 -- event/cpu_locks.sh@90 -- # killprocess 57707 00:05:27.777 09:43:16 -- common/autotest_common.sh@936 -- # '[' -z 57707 ']' 00:05:27.777 09:43:16 -- common/autotest_common.sh@940 -- # kill -0 57707 00:05:27.777 09:43:16 -- common/autotest_common.sh@941 -- # uname 00:05:27.777 09:43:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.777 09:43:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57707 00:05:27.777 09:43:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.777 09:43:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.777 killing process with pid 57707 00:05:27.777 09:43:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57707' 00:05:27.777 09:43:16 -- common/autotest_common.sh@955 -- # kill 57707 00:05:27.777 09:43:16 -- common/autotest_common.sh@960 -- # wait 57707 00:05:29.152 00:05:29.152 real 0m7.110s 00:05:29.152 user 0m7.683s 00:05:29.152 sys 0m0.887s 00:05:29.152 09:43:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.152 ************************************ 00:05:29.152 END TEST non_locking_app_on_locked_coremask 00:05:29.152 ************************************ 00:05:29.152 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.152 09:43:17 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:29.152 09:43:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.152 09:43:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.152 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.152 ************************************ 00:05:29.152 START TEST locking_app_on_unlocked_coremask 00:05:29.152 ************************************ 00:05:29.152 09:43:17 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:29.152 09:43:17 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57805 00:05:29.152 09:43:17 -- event/cpu_locks.sh@99 -- # waitforlisten 57805 /var/tmp/spdk.sock 00:05:29.152 09:43:17 -- common/autotest_common.sh@829 -- # '[' -z 57805 ']' 00:05:29.152 09:43:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.152 09:43:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.152 09:43:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.152 09:43:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.152 09:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.152 09:43:17 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:29.152 [2024-12-15 09:43:17.850697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.152 [2024-12-15 09:43:17.851180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57805 ] 00:05:29.152 [2024-12-15 09:43:17.990813] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:29.152 [2024-12-15 09:43:17.990855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.152 [2024-12-15 09:43:18.134043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.152 [2024-12-15 09:43:18.134208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.719 09:43:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.719 09:43:18 -- common/autotest_common.sh@862 -- # return 0 00:05:29.719 09:43:18 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57816 00:05:29.719 09:43:18 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.719 09:43:18 -- event/cpu_locks.sh@103 -- # waitforlisten 57816 /var/tmp/spdk2.sock 00:05:29.719 09:43:18 -- common/autotest_common.sh@829 -- # '[' -z 57816 ']' 00:05:29.719 09:43:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.719 09:43:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.719 09:43:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.719 09:43:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.719 09:43:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.719 [2024-12-15 09:43:18.682780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
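This test drives two targets on the same core: the first launched with --disable-cpumask-locks so it never claims core 0, the second launched plain so it can take the lock. A condensed sketch using the binary path and flags that appear in the trace (the waitforlisten calls between the launches are omitted):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # pid 57805: runs on core 0 without a lock
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # pid 57816: same core, takes the lock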
00:05:29.719 [2024-12-15 09:43:18.682866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57816 ] 00:05:29.977 [2024-12-15 09:43:18.823582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.234 [2024-12-15 09:43:19.111092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.234 [2024-12-15 09:43:19.111242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.612 09:43:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.612 09:43:20 -- common/autotest_common.sh@862 -- # return 0 00:05:31.612 09:43:20 -- event/cpu_locks.sh@105 -- # locks_exist 57816 00:05:31.612 09:43:20 -- event/cpu_locks.sh@22 -- # lslocks -p 57816 00:05:31.612 09:43:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.612 09:43:20 -- event/cpu_locks.sh@107 -- # killprocess 57805 00:05:31.612 09:43:20 -- common/autotest_common.sh@936 -- # '[' -z 57805 ']' 00:05:31.612 09:43:20 -- common/autotest_common.sh@940 -- # kill -0 57805 00:05:31.612 09:43:20 -- common/autotest_common.sh@941 -- # uname 00:05:31.612 09:43:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.612 09:43:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57805 00:05:31.612 09:43:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.612 09:43:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.612 killing process with pid 57805 00:05:31.612 09:43:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57805' 00:05:31.612 09:43:20 -- common/autotest_common.sh@955 -- # kill 57805 00:05:31.612 09:43:20 -- common/autotest_common.sh@960 -- # wait 57805 00:05:34.142 09:43:22 -- event/cpu_locks.sh@108 -- # killprocess 57816 00:05:34.142 09:43:22 -- common/autotest_common.sh@936 -- # '[' -z 57816 ']' 00:05:34.142 09:43:22 -- common/autotest_common.sh@940 -- # kill -0 57816 00:05:34.142 09:43:22 -- common/autotest_common.sh@941 -- # uname 00:05:34.142 09:43:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.142 09:43:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57816 00:05:34.142 killing process with pid 57816 00:05:34.142 09:43:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.142 09:43:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.142 09:43:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57816' 00:05:34.142 09:43:22 -- common/autotest_common.sh@955 -- # kill 57816 00:05:34.142 09:43:22 -- common/autotest_common.sh@960 -- # wait 57816 00:05:35.517 00:05:35.517 real 0m6.404s 00:05:35.517 user 0m6.752s 00:05:35.517 sys 0m0.805s 00:05:35.517 09:43:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.517 09:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.517 ************************************ 00:05:35.517 END TEST locking_app_on_unlocked_coremask 00:05:35.517 ************************************ 00:05:35.517 09:43:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.517 09:43:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.517 09:43:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.517 09:43:24 -- common/autotest_common.sh@10 -- # set +x 
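Both killprocess calls in this test follow the shape visible in the xtrace: confirm the pid is alive, read its command name, then kill and reap it. A sketch under those assumptions (the real helper additionally special-cases sudo-wrapped processes and non-Linux ps output):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                # fail early if the pid is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # an spdk_tgt shows up as reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap the child; ignore its exit status
    }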
00:05:35.517 ************************************ 00:05:35.517 START TEST locking_app_on_locked_coremask 00:05:35.517 ************************************ 00:05:35.517 09:43:24 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:35.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.517 09:43:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57920 00:05:35.517 09:43:24 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.517 09:43:24 -- event/cpu_locks.sh@116 -- # waitforlisten 57920 /var/tmp/spdk.sock 00:05:35.517 09:43:24 -- common/autotest_common.sh@829 -- # '[' -z 57920 ']' 00:05:35.517 09:43:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.517 09:43:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.517 09:43:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.517 09:43:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.517 09:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.517 [2024-12-15 09:43:24.293080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.517 [2024-12-15 09:43:24.293189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:05:35.517 [2024-12-15 09:43:24.442010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.776 [2024-12-15 09:43:24.614968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.776 [2024-12-15 09:43:24.615173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.153 09:43:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.153 09:43:25 -- common/autotest_common.sh@862 -- # return 0 00:05:37.153 09:43:25 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.153 09:43:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57938 00:05:37.153 09:43:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57938 /var/tmp/spdk2.sock 00:05:37.153 09:43:25 -- common/autotest_common.sh@650 -- # local es=0 00:05:37.153 09:43:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57938 /var/tmp/spdk2.sock 00:05:37.153 09:43:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.153 09:43:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.153 09:43:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.153 09:43:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.153 09:43:25 -- common/autotest_common.sh@653 -- # waitforlisten 57938 /var/tmp/spdk2.sock 00:05:37.153 09:43:25 -- common/autotest_common.sh@829 -- # '[' -z 57938 ']' 00:05:37.153 09:43:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.153 09:43:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.153 09:43:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
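waitforlisten, invoked here for /var/tmp/spdk2.sock, polls until the RPC socket appears or the retry budget is spent. An approximate reconstruction from the traced variables (rpc_addr, max_retries=100); the real helper appears to confirm readiness with an actual RPC rather than only testing for the socket, so treat the -S check as a simplification:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # the target died before listening
            [ -S "$rpc_addr" ] && return 0           # the socket is up
            sleep 0.5
        done
        return 1
    }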
00:05:37.153 09:43:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.153 09:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.153 [2024-12-15 09:43:25.848658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.153 [2024-12-15 09:43:25.848767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57938 ] 00:05:37.153 [2024-12-15 09:43:26.001385] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57920 has claimed it. 00:05:37.153 [2024-12-15 09:43:26.001443] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.719 ERROR: process (pid: 57938) is no longer running 00:05:37.719 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57938) - No such process 00:05:37.719 09:43:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.719 09:43:26 -- common/autotest_common.sh@862 -- # return 1 00:05:37.719 09:43:26 -- common/autotest_common.sh@653 -- # es=1 00:05:37.719 09:43:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.719 09:43:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.719 09:43:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.719 09:43:26 -- event/cpu_locks.sh@122 -- # locks_exist 57920 00:05:37.719 09:43:26 -- event/cpu_locks.sh@22 -- # lslocks -p 57920 00:05:37.719 09:43:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.719 09:43:26 -- event/cpu_locks.sh@124 -- # killprocess 57920 00:05:37.719 09:43:26 -- common/autotest_common.sh@936 -- # '[' -z 57920 ']' 00:05:37.719 09:43:26 -- common/autotest_common.sh@940 -- # kill -0 57920 00:05:37.719 09:43:26 -- common/autotest_common.sh@941 -- # uname 00:05:37.719 09:43:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.719 09:43:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57920 00:05:37.719 killing process with pid 57920 00:05:37.719 09:43:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.719 09:43:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.719 09:43:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57920' 00:05:37.719 09:43:26 -- common/autotest_common.sh@955 -- # kill 57920 00:05:37.719 09:43:26 -- common/autotest_common.sh@960 -- # wait 57920 00:05:39.092 00:05:39.092 real 0m3.615s 00:05:39.092 user 0m3.969s 00:05:39.092 sys 0m0.501s 00:05:39.092 09:43:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.092 ************************************ 00:05:39.092 END TEST locking_app_on_locked_coremask 00:05:39.092 ************************************ 00:05:39.092 09:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:39.092 09:43:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:39.092 09:43:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.092 09:43:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.092 09:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:39.092 ************************************ 00:05:39.092 START TEST locking_overlapped_coremask 00:05:39.092 ************************************ 00:05:39.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
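The -m 0x7 passed to the target above is a CPU core bitmask: 0b111 selects cores 0 through 2, which is why the initialization that follows reports three available cores and starts a reactor on each. Decoding the mask is plain bit arithmetic:

    for core in {0..4}; do
        (( (0x7 >> core) & 1 )) && echo "core $core selected"   # prints cores 0, 1 and 2
    done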
00:05:39.092 09:43:27 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:39.092 09:43:27 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57991 00:05:39.092 09:43:27 -- event/cpu_locks.sh@133 -- # waitforlisten 57991 /var/tmp/spdk.sock 00:05:39.092 09:43:27 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:39.092 09:43:27 -- common/autotest_common.sh@829 -- # '[' -z 57991 ']' 00:05:39.092 09:43:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.092 09:43:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.092 09:43:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.092 09:43:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.092 09:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:39.092 [2024-12-15 09:43:27.977088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.092 [2024-12-15 09:43:27.977201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57991 ] 00:05:39.350 [2024-12-15 09:43:28.125090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.350 [2024-12-15 09:43:28.270830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.350 [2024-12-15 09:43:28.271429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.350 [2024-12-15 09:43:28.271634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.350 [2024-12-15 09:43:28.271637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.938 09:43:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.939 09:43:28 -- common/autotest_common.sh@862 -- # return 0 00:05:39.939 09:43:28 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:39.939 09:43:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58009 00:05:39.939 09:43:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58009 /var/tmp/spdk2.sock 00:05:39.939 09:43:28 -- common/autotest_common.sh@650 -- # local es=0 00:05:39.939 09:43:28 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58009 /var/tmp/spdk2.sock 00:05:39.939 09:43:28 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:39.939 09:43:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.939 09:43:28 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:39.939 09:43:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.939 09:43:28 -- common/autotest_common.sh@653 -- # waitforlisten 58009 /var/tmp/spdk2.sock 00:05:39.939 09:43:28 -- common/autotest_common.sh@829 -- # '[' -z 58009 ']' 00:05:39.939 09:43:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.939 09:43:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.939 09:43:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
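The second instance requests -m 0x1c (cores 2 through 4) while the first already holds cores 0 through 2 from 0x7; the two masks intersect on core 2, so the lock-claim failure reported further down is the intended outcome:

    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2: core 2 is contested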
00:05:39.939 09:43:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.939 09:43:28 -- common/autotest_common.sh@10 -- # set +x 00:05:39.939 [2024-12-15 09:43:28.830690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.939 [2024-12-15 09:43:28.830954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58009 ] 00:05:40.201 [2024-12-15 09:43:28.988502] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57991 has claimed it. 00:05:40.201 [2024-12-15 09:43:28.988553] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.459 ERROR: process (pid: 58009) is no longer running 00:05:40.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (58009) - No such process 00:05:40.459 09:43:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.459 09:43:29 -- common/autotest_common.sh@862 -- # return 1 00:05:40.459 09:43:29 -- common/autotest_common.sh@653 -- # es=1 00:05:40.459 09:43:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.459 09:43:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.459 09:43:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.459 09:43:29 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.459 09:43:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.459 09:43:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.459 09:43:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.459 09:43:29 -- event/cpu_locks.sh@141 -- # killprocess 57991 00:05:40.459 09:43:29 -- common/autotest_common.sh@936 -- # '[' -z 57991 ']' 00:05:40.459 09:43:29 -- common/autotest_common.sh@940 -- # kill -0 57991 00:05:40.459 09:43:29 -- common/autotest_common.sh@941 -- # uname 00:05:40.459 09:43:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.459 09:43:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57991 00:05:40.717 09:43:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.717 09:43:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.717 09:43:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57991' 00:05:40.717 killing process with pid 57991 00:05:40.717 09:43:29 -- common/autotest_common.sh@955 -- # kill 57991 00:05:40.717 09:43:29 -- common/autotest_common.sh@960 -- # wait 57991 00:05:42.092 00:05:42.092 real 0m2.782s 00:05:42.092 user 0m7.283s 00:05:42.092 sys 0m0.393s 00:05:42.092 09:43:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.092 09:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.092 ************************************ 00:05:42.092 END TEST locking_overlapped_coremask 00:05:42.092 ************************************ 00:05:42.092 09:43:30 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:42.092 09:43:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.092 09:43:30 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.092 09:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.092 ************************************ 00:05:42.092 START TEST locking_overlapped_coremask_via_rpc 00:05:42.092 ************************************ 00:05:42.092 09:43:30 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:42.092 09:43:30 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58062 00:05:42.092 09:43:30 -- event/cpu_locks.sh@149 -- # waitforlisten 58062 /var/tmp/spdk.sock 00:05:42.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.092 09:43:30 -- common/autotest_common.sh@829 -- # '[' -z 58062 ']' 00:05:42.092 09:43:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.092 09:43:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.092 09:43:30 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:42.092 09:43:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.092 09:43:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.092 09:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.092 [2024-12-15 09:43:30.804559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.092 [2024-12-15 09:43:30.804805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58062 ] 00:05:42.092 [2024-12-15 09:43:30.952308] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:42.092 [2024-12-15 09:43:30.952455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.092 [2024-12-15 09:43:31.097567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.092 [2024-12-15 09:43:31.098119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.092 [2024-12-15 09:43:31.098432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.092 [2024-12-15 09:43:31.098453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.658 09:43:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.658 09:43:31 -- common/autotest_common.sh@862 -- # return 0 00:05:42.658 09:43:31 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58080 00:05:42.658 09:43:31 -- event/cpu_locks.sh@153 -- # waitforlisten 58080 /var/tmp/spdk2.sock 00:05:42.658 09:43:31 -- common/autotest_common.sh@829 -- # '[' -z 58080 ']' 00:05:42.658 09:43:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.659 09:43:31 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:42.659 09:43:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.659 09:43:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
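In the via_rpc variant both targets start with --disable-cpumask-locks, and the core locks are taken afterwards through the framework_enable_cpumask_locks RPC seen in the trace. rpc_cmd is a wrapper bound to the current RPC socket; assuming the repo layout shown in this log, the equivalent direct invocations would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected to fail: core 2 is taken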
00:05:42.659 09:43:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.659 09:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.917 [2024-12-15 09:43:31.686053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.917 [2024-12-15 09:43:31.686535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58080 ] 00:05:42.917 [2024-12-15 09:43:31.835375] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:42.917 [2024-12-15 09:43:31.835413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.175 [2024-12-15 09:43:32.131147] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.175 [2024-12-15 09:43:32.131528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.175 [2024-12-15 09:43:32.135337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.175 [2024-12-15 09:43:32.135363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:44.548 09:43:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.548 09:43:33 -- common/autotest_common.sh@862 -- # return 0 00:05:44.548 09:43:33 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.548 09:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.548 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 09:43:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.548 09:43:33 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.548 09:43:33 -- common/autotest_common.sh@650 -- # local es=0 00:05:44.548 09:43:33 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.548 09:43:33 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:44.548 09:43:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.548 09:43:33 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:44.548 09:43:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.549 09:43:33 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.549 09:43:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.549 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.549 [2024-12-15 09:43:33.204392] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58062 has claimed it. 00:05:44.549 request: 00:05:44.549 { 00:05:44.549 "method": "framework_enable_cpumask_locks", 00:05:44.549 "req_id": 1 00:05:44.549 } 00:05:44.549 Got JSON-RPC error response 00:05:44.549 response: 00:05:44.549 { 00:05:44.549 "code": -32603, 00:05:44.549 "message": "Failed to claim CPU core: 2" 00:05:44.549 } 00:05:44.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
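The -32603 response above is the pass condition here: the second target cannot enable its locks because pid 58062 already owns core 2. Once the first target holds its locks there is one file per claimed core under /var/tmp, and check_remaining_locks (traced just below) compares the glob against the expected set:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 from -m 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]        # a stray or missing lock file fails the test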
00:05:44.549 09:43:33 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:44.549 09:43:33 -- common/autotest_common.sh@653 -- # es=1 00:05:44.549 09:43:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.549 09:43:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.549 09:43:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.549 09:43:33 -- event/cpu_locks.sh@158 -- # waitforlisten 58062 /var/tmp/spdk.sock 00:05:44.549 09:43:33 -- common/autotest_common.sh@829 -- # '[' -z 58062 ']' 00:05:44.549 09:43:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.549 09:43:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.549 09:43:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.549 09:43:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.549 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.549 09:43:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.549 09:43:33 -- common/autotest_common.sh@862 -- # return 0 00:05:44.549 09:43:33 -- event/cpu_locks.sh@159 -- # waitforlisten 58080 /var/tmp/spdk2.sock 00:05:44.549 09:43:33 -- common/autotest_common.sh@829 -- # '[' -z 58080 ']' 00:05:44.549 09:43:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.549 09:43:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.549 09:43:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.549 09:43:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.549 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.806 ************************************ 00:05:44.806 END TEST locking_overlapped_coremask_via_rpc 00:05:44.806 ************************************ 00:05:44.806 09:43:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.806 09:43:33 -- common/autotest_common.sh@862 -- # return 0 00:05:44.806 09:43:33 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:44.807 09:43:33 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.807 09:43:33 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.807 09:43:33 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.807 00:05:44.807 real 0m2.859s 00:05:44.807 user 0m1.156s 00:05:44.807 sys 0m0.136s 00:05:44.807 09:43:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.807 09:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.807 09:43:33 -- event/cpu_locks.sh@174 -- # cleanup 00:05:44.807 09:43:33 -- event/cpu_locks.sh@15 -- # [[ -z 58062 ]] 00:05:44.807 09:43:33 -- event/cpu_locks.sh@15 -- # killprocess 58062 00:05:44.807 09:43:33 -- common/autotest_common.sh@936 -- # '[' -z 58062 ']' 00:05:44.807 09:43:33 -- common/autotest_common.sh@940 -- # kill -0 58062 00:05:44.807 09:43:33 -- common/autotest_common.sh@941 -- # uname 00:05:44.807 09:43:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.807 09:43:33 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58062 00:05:44.807 killing process with pid 58062 00:05:44.807 09:43:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.807 09:43:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.807 09:43:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58062' 00:05:44.807 09:43:33 -- common/autotest_common.sh@955 -- # kill 58062 00:05:44.807 09:43:33 -- common/autotest_common.sh@960 -- # wait 58062 00:05:46.180 09:43:34 -- event/cpu_locks.sh@16 -- # [[ -z 58080 ]] 00:05:46.180 09:43:34 -- event/cpu_locks.sh@16 -- # killprocess 58080 00:05:46.180 09:43:34 -- common/autotest_common.sh@936 -- # '[' -z 58080 ']' 00:05:46.180 09:43:34 -- common/autotest_common.sh@940 -- # kill -0 58080 00:05:46.180 09:43:34 -- common/autotest_common.sh@941 -- # uname 00:05:46.180 09:43:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.180 09:43:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58080 00:05:46.180 killing process with pid 58080 00:05:46.180 09:43:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:46.180 09:43:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:46.180 09:43:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58080' 00:05:46.180 09:43:34 -- common/autotest_common.sh@955 -- # kill 58080 00:05:46.180 09:43:34 -- common/autotest_common.sh@960 -- # wait 58080 00:05:47.113 09:43:36 -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.113 Process with pid 58062 is not found 00:05:47.113 Process with pid 58080 is not found 00:05:47.113 09:43:36 -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.113 09:43:36 -- event/cpu_locks.sh@15 -- # [[ -z 58062 ]] 00:05:47.113 09:43:36 -- event/cpu_locks.sh@15 -- # killprocess 58062 00:05:47.113 09:43:36 -- common/autotest_common.sh@936 -- # '[' -z 58062 ']' 00:05:47.113 09:43:36 -- common/autotest_common.sh@940 -- # kill -0 58062 00:05:47.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58062) - No such process 00:05:47.113 09:43:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58062 is not found' 00:05:47.113 09:43:36 -- event/cpu_locks.sh@16 -- # [[ -z 58080 ]] 00:05:47.113 09:43:36 -- event/cpu_locks.sh@16 -- # killprocess 58080 00:05:47.113 09:43:36 -- common/autotest_common.sh@936 -- # '[' -z 58080 ']' 00:05:47.113 09:43:36 -- common/autotest_common.sh@940 -- # kill -0 58080 00:05:47.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58080) - No such process 00:05:47.113 09:43:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58080 is not found' 00:05:47.113 09:43:36 -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.113 ************************************ 00:05:47.113 END TEST cpu_locks 00:05:47.113 ************************************ 00:05:47.113 00:05:47.113 real 0m31.952s 00:05:47.113 user 0m52.972s 00:05:47.113 sys 0m4.321s 00:05:47.113 09:43:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.113 09:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.113 ************************************ 00:05:47.113 END TEST event 00:05:47.113 ************************************ 00:05:47.113 00:05:47.113 real 0m59.269s 00:05:47.113 user 1m45.808s 00:05:47.113 sys 0m7.025s 00:05:47.113 09:43:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.113 09:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.113 09:43:36 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.113 09:43:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.113 09:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.113 09:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.113 ************************************ 00:05:47.113 START TEST thread 00:05:47.113 ************************************ 00:05:47.371 09:43:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.371 * Looking for test storage... 00:05:47.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:47.371 09:43:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.371 09:43:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.371 09:43:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.371 09:43:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.371 09:43:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.371 09:43:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.371 09:43:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.371 09:43:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.371 09:43:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.371 09:43:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.371 09:43:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.371 09:43:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.371 09:43:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.371 09:43:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.371 09:43:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.371 09:43:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.371 09:43:36 -- scripts/common.sh@344 -- # : 1 00:05:47.371 09:43:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.371 09:43:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.371 09:43:36 -- scripts/common.sh@364 -- # decimal 1 00:05:47.371 09:43:36 -- scripts/common.sh@352 -- # local d=1 00:05:47.371 09:43:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.371 09:43:36 -- scripts/common.sh@354 -- # echo 1 00:05:47.371 09:43:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.371 09:43:36 -- scripts/common.sh@365 -- # decimal 2 00:05:47.371 09:43:36 -- scripts/common.sh@352 -- # local d=2 00:05:47.371 09:43:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.371 09:43:36 -- scripts/common.sh@354 -- # echo 2 00:05:47.371 09:43:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.371 09:43:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.371 09:43:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.371 09:43:36 -- scripts/common.sh@367 -- # return 0 00:05:47.371 09:43:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.371 09:43:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 09:43:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 09:43:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 09:43:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 09:43:36 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.371 09:43:36 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:47.371 09:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.371 09:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 ************************************ 00:05:47.371 START TEST thread_poller_perf 00:05:47.371 ************************************ 00:05:47.371 09:43:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.371 [2024-12-15 09:43:36.290281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
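The scripts/common.sh block above is a field-by-field decimal version comparison, apparently deciding that the installed lcov (1.15) predates 2 and therefore needs the old-style --rc coverage options. A condensed equivalent of the traced logic; the real cmp_versions also validates each field and supports the other comparison operators:

    lt() {    # lt 1.15 2 -> exit 0 (true)
        local IFS=.-:
        local -a v1 v2
        local i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }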
00:05:47.371 [2024-12-15 09:43:36.290367] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58231 ] 00:05:47.629 [2024-12-15 09:43:36.431174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.629 [2024-12-15 09:43:36.572533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.629 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:49.044 [2024-12-15T09:43:38.060Z] ====================================== 00:05:49.044 [2024-12-15T09:43:38.060Z] busy:2607987832 (cyc) 00:05:49.044 [2024-12-15T09:43:38.060Z] total_run_count: 385000 00:05:49.044 [2024-12-15T09:43:38.060Z] tsc_hz: 2600000000 (cyc) 00:05:49.044 [2024-12-15T09:43:38.060Z] ====================================== 00:05:49.044 [2024-12-15T09:43:38.060Z] poller_cost: 6773 (cyc), 2605 (nsec) 00:05:49.044 00:05:49.044 real 0m1.525s 00:05:49.044 user 0m1.351s 00:05:49.044 sys 0m0.065s 00:05:49.044 ************************************ 00:05:49.044 END TEST thread_poller_perf 00:05:49.044 ************************************ 00:05:49.044 09:43:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.044 09:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:49.044 09:43:37 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.044 09:43:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:49.044 09:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.044 09:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:49.044 ************************************ 00:05:49.044 START TEST thread_poller_perf 00:05:49.044 ************************************ 00:05:49.044 09:43:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.044 [2024-12-15 09:43:37.855711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.044 [2024-12-15 09:43:37.855824] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:05:49.044 [2024-12-15 09:43:38.003455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.302 [2024-12-15 09:43:38.142323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.302 Running 1000 pollers for 1 seconds with 0 microseconds period. 
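The figures in the first summary above are internally consistent: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds at the reported tsc_hz of 2.6 GHz:

    echo $(( 2607987832 / 385000 ))   # 6773 cycles per poll, matching poller_cost
    echo $(( 6773 * 10 / 26 ))        # 2605 ns at 2600000000 cycles/s

Note that the cost per poll (2605 ns) exceeds the nominal 1 us period itself.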
00:05:50.675 [2024-12-15T09:43:39.691Z] ====================================== 00:05:50.675 [2024-12-15T09:43:39.691Z] busy:2603579936 (cyc) 00:05:50.675 [2024-12-15T09:43:39.691Z] total_run_count: 5336000 00:05:50.675 [2024-12-15T09:43:39.691Z] tsc_hz: 2600000000 (cyc) 00:05:50.675 [2024-12-15T09:43:39.691Z] ====================================== 00:05:50.675 [2024-12-15T09:43:39.691Z] poller_cost: 487 (cyc), 187 (nsec) 00:05:50.675 00:05:50.675 real 0m1.524s 00:05:50.675 user 0m1.340s 00:05:50.675 sys 0m0.077s 00:05:50.675 09:43:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.675 09:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.675 ************************************ 00:05:50.675 END TEST thread_poller_perf 00:05:50.675 ************************************ 00:05:50.675 09:43:39 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:50.675 00:05:50.675 real 0m3.260s 00:05:50.675 user 0m2.784s 00:05:50.675 sys 0m0.265s 00:05:50.675 09:43:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.675 09:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.675 ************************************ 00:05:50.675 END TEST thread 00:05:50.675 ************************************ 00:05:50.675 09:43:39 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:50.675 09:43:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.675 09:43:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.675 09:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.675 ************************************ 00:05:50.675 START TEST accel 00:05:50.675 ************************************ 00:05:50.675 09:43:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:50.675 * Looking for test storage... 00:05:50.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:50.675 09:43:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.675 09:43:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.675 09:43:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.675 09:43:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.675 09:43:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.675 09:43:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.675 09:43:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.675 09:43:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.675 09:43:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.675 09:43:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.675 09:43:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.675 09:43:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.675 09:43:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.675 09:43:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.675 09:43:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.675 09:43:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.675 09:43:39 -- scripts/common.sh@344 -- # : 1 00:05:50.675 09:43:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.675 09:43:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.675 09:43:39 -- scripts/common.sh@364 -- # decimal 1 00:05:50.675 09:43:39 -- scripts/common.sh@352 -- # local d=1 00:05:50.675 09:43:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.675 09:43:39 -- scripts/common.sh@354 -- # echo 1 00:05:50.675 09:43:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.675 09:43:39 -- scripts/common.sh@365 -- # decimal 2 00:05:50.675 09:43:39 -- scripts/common.sh@352 -- # local d=2 00:05:50.675 09:43:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.675 09:43:39 -- scripts/common.sh@354 -- # echo 2 00:05:50.675 09:43:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.675 09:43:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.675 09:43:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.675 09:43:39 -- scripts/common.sh@367 -- # return 0 00:05:50.676 09:43:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.676 09:43:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.676 --rc genhtml_branch_coverage=1 00:05:50.676 --rc genhtml_function_coverage=1 00:05:50.676 --rc genhtml_legend=1 00:05:50.676 --rc geninfo_all_blocks=1 00:05:50.676 --rc geninfo_unexecuted_blocks=1 00:05:50.676 00:05:50.676 ' 00:05:50.676 09:43:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.676 --rc genhtml_branch_coverage=1 00:05:50.676 --rc genhtml_function_coverage=1 00:05:50.676 --rc genhtml_legend=1 00:05:50.676 --rc geninfo_all_blocks=1 00:05:50.676 --rc geninfo_unexecuted_blocks=1 00:05:50.676 00:05:50.676 ' 00:05:50.676 09:43:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.676 --rc genhtml_branch_coverage=1 00:05:50.676 --rc genhtml_function_coverage=1 00:05:50.676 --rc genhtml_legend=1 00:05:50.676 --rc geninfo_all_blocks=1 00:05:50.676 --rc geninfo_unexecuted_blocks=1 00:05:50.676 00:05:50.676 ' 00:05:50.676 09:43:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.676 --rc genhtml_branch_coverage=1 00:05:50.676 --rc genhtml_function_coverage=1 00:05:50.676 --rc genhtml_legend=1 00:05:50.676 --rc geninfo_all_blocks=1 00:05:50.676 --rc geninfo_unexecuted_blocks=1 00:05:50.676 00:05:50.676 ' 00:05:50.676 09:43:39 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:50.676 09:43:39 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:50.676 09:43:39 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.676 09:43:39 -- accel/accel.sh@59 -- # spdk_tgt_pid=58362 00:05:50.676 09:43:39 -- accel/accel.sh@60 -- # waitforlisten 58362 00:05:50.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.676 09:43:39 -- common/autotest_common.sh@829 -- # '[' -z 58362 ']' 00:05:50.676 09:43:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.676 09:43:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.676 09:43:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
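The cmp_versions trace above is a field-by-field numeric comparison of dot-separated versions; lt 1.15 2 succeeds, so the branch/function coverage flags are exported for the older lcov. An equivalent standalone check — a sketch using GNU sort -V, not the repo's scripts/common.sh helper:

  $ ver_lt() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]; }
  $ ver_lt 1.15 2 && echo 'lcov older than 2: enable branch/function coverage options'
  lcov older than 2: enable branch/function coverage options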
00:05:50.676 09:43:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.676 09:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:50.676 09:43:39 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:50.676 09:43:39 -- accel/accel.sh@58 -- # build_accel_config 00:05:50.676 09:43:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.676 09:43:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.676 09:43:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.676 09:43:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.676 09:43:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.676 09:43:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.676 09:43:39 -- accel/accel.sh@42 -- # jq -r . 00:05:50.676 [2024-12-15 09:43:39.630654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.676 [2024-12-15 09:43:39.630770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58362 ] 00:05:50.933 [2024-12-15 09:43:39.780192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.933 [2024-12-15 09:43:39.922762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.933 [2024-12-15 09:43:39.922918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.499 09:43:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.499 09:43:40 -- common/autotest_common.sh@862 -- # return 0 00:05:51.499 09:43:40 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:51.499 09:43:40 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:51.499 09:43:40 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:51.499 09:43:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.499 09:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.499 09:43:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.499 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.499 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.499 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.500 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.500 
09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.500 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.500 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.500 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.500 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.500 09:43:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.500 09:43:40 -- accel/accel.sh@64 -- # IFS== 00:05:51.500 09:43:40 -- accel/accel.sh@64 -- # read -r opc module 00:05:51.500 09:43:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:51.500 09:43:40 -- accel/accel.sh@67 -- # killprocess 58362 00:05:51.500 09:43:40 -- common/autotest_common.sh@936 -- # '[' -z 58362 ']' 00:05:51.500 09:43:40 -- common/autotest_common.sh@940 -- # kill -0 58362 00:05:51.500 09:43:40 -- common/autotest_common.sh@941 -- # uname 00:05:51.500 09:43:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.500 09:43:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58362 00:05:51.500 09:43:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.500 killing process with pid 58362 00:05:51.500 09:43:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.500 09:43:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58362' 00:05:51.500 09:43:40 -- common/autotest_common.sh@955 -- # kill 58362 00:05:51.500 09:43:40 -- common/autotest_common.sh@960 -- # wait 58362 00:05:52.872 09:43:41 -- accel/accel.sh@68 -- # trap - ERR 00:05:52.872 09:43:41 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:52.872 09:43:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:52.872 09:43:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.872 09:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.872 09:43:41 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:52.872 09:43:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:52.872 09:43:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.872 09:43:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.872 09:43:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.872 09:43:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.872 09:43:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.872 09:43:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.872 09:43:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.872 09:43:41 -- accel/accel.sh@42 -- # jq -r . 
00:05:52.872 09:43:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.872 09:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.872 09:43:41 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:52.872 09:43:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:52.872 09:43:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.872 09:43:41 -- common/autotest_common.sh@10 -- # set +x 00:05:52.872 ************************************ 00:05:52.872 START TEST accel_missing_filename 00:05:52.872 ************************************ 00:05:52.872 09:43:41 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:52.872 09:43:41 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.872 09:43:41 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:52.872 09:43:41 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:52.872 09:43:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.872 09:43:41 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:52.872 09:43:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.872 09:43:41 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:52.872 09:43:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.872 09:43:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:52.872 09:43:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.872 09:43:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.872 09:43:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.872 09:43:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.872 09:43:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.872 09:43:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.872 09:43:41 -- accel/accel.sh@42 -- # jq -r . 00:05:52.872 [2024-12-15 09:43:41.830067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.872 [2024-12-15 09:43:41.830171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:05:53.131 [2024-12-15 09:43:41.976484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.131 [2024-12-15 09:43:42.118680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.389 [2024-12-15 09:43:42.229566] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.647 [2024-12-15 09:43:42.490298] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:53.906 A filename is required. 
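The abort here is the point of the test: per the accel_perf usage text printed later in this log, compress/decompress workloads take their input from a file given with -l, so -t 1 -w compress alone must fail. For contrast, roughly:

  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
      # aborts: A filename is required.
  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
      # runs, compressing the input file

The test right after this one exercises the other guard: adding -y to a compress run is also rejected, since compression has no verify path.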
00:05:53.906 09:43:42 -- common/autotest_common.sh@653 -- # es=234 00:05:53.906 09:43:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.906 09:43:42 -- common/autotest_common.sh@662 -- # es=106 00:05:53.906 09:43:42 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:53.906 09:43:42 -- common/autotest_common.sh@670 -- # es=1 00:05:53.906 09:43:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.906 00:05:53.906 real 0m0.902s 00:05:53.906 user 0m0.719s 00:05:53.906 sys 0m0.106s 00:05:53.906 09:43:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.906 ************************************ 00:05:53.906 END TEST accel_missing_filename 00:05:53.906 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:05:53.906 ************************************ 00:05:53.906 09:43:42 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.906 09:43:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:53.906 09:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.906 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:05:53.906 ************************************ 00:05:53.906 START TEST accel_compress_verify 00:05:53.906 ************************************ 00:05:53.906 09:43:42 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.906 09:43:42 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.906 09:43:42 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.906 09:43:42 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:53.906 09:43:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.906 09:43:42 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:53.906 09:43:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.906 09:43:42 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.906 09:43:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.906 09:43:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.906 09:43:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.906 09:43:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.906 09:43:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.906 09:43:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.906 09:43:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.906 09:43:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.906 09:43:42 -- accel/accel.sh@42 -- # jq -r . 00:05:53.906 [2024-12-15 09:43:42.766792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
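The es= lines at the top of this stretch show the NOT wrapper classifying the failure: exit status 234 is folded down by 128 to 106, mapped through a case to es=1, and that nonzero status is what lets the negative test pass. A simplified stand-in for the helper — not the repo's autotest_common.sh implementation, just the same idea:

  NOT() {                                    # succeed only when the wrapped command fails
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style exit codes back into range
      (( es > 0 ))                           # nonzero status = the expected failure happened
  }
  NOT false && echo 'expected failure observed'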
00:05:53.906 [2024-12-15 09:43:42.767199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58452 ] 00:05:54.164 [2024-12-15 09:43:42.922688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.164 [2024-12-15 09:43:43.125304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.421 [2024-12-15 09:43:43.277717] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.680 [2024-12-15 09:43:43.613373] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:54.941 00:05:54.941 Compression does not support the verify option, aborting. 00:05:54.941 09:43:43 -- common/autotest_common.sh@653 -- # es=161 00:05:54.941 09:43:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.941 09:43:43 -- common/autotest_common.sh@662 -- # es=33 00:05:54.941 09:43:43 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:54.941 09:43:43 -- common/autotest_common.sh@670 -- # es=1 00:05:54.941 09:43:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.941 00:05:54.941 real 0m1.152s 00:05:54.941 user 0m0.933s 00:05:54.941 sys 0m0.136s 00:05:54.941 09:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.941 ************************************ 00:05:54.941 END TEST accel_compress_verify 00:05:54.941 ************************************ 00:05:54.941 09:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.941 09:43:43 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:54.941 09:43:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:54.941 09:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.941 09:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:54.941 ************************************ 00:05:54.941 START TEST accel_wrong_workload 00:05:54.941 ************************************ 00:05:54.941 09:43:43 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:54.941 09:43:43 -- common/autotest_common.sh@650 -- # local es=0 00:05:54.941 09:43:43 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:54.941 09:43:43 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:54.941 09:43:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.941 09:43:43 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:54.941 09:43:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.941 09:43:43 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:54.941 09:43:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:54.941 09:43:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.941 09:43:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.941 09:43:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.941 09:43:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.941 09:43:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.941 09:43:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.941 09:43:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.941 09:43:43 -- accel/accel.sh@42 -- # jq -r . 
00:05:55.202 Unsupported workload type: foobar 00:05:55.202 [2024-12-15 09:43:43.976204] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:55.202 accel_perf options: 00:05:55.202 [-h help message] 00:05:55.202 [-q queue depth per core] 00:05:55.203 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.203 [-T number of threads per core 00:05:55.203 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.203 [-t time in seconds] 00:05:55.203 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.203 [ dif_verify, , dif_generate, dif_generate_copy 00:05:55.203 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.203 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.203 [-S for crc32c workload, use this seed value (default 0) 00:05:55.203 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.203 [-f for fill workload, use this BYTE value (default 255) 00:05:55.203 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.203 [-y verify result if this switch is on] 00:05:55.203 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.203 Can be used to spread operations across a wider range of memory. 00:05:55.203 09:43:43 -- common/autotest_common.sh@653 -- # es=1 00:05:55.203 09:43:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.203 09:43:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.203 09:43:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.203 00:05:55.203 real 0m0.050s 00:05:55.203 user 0m0.050s 00:05:55.203 sys 0m0.029s 00:05:55.203 09:43:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.203 ************************************ 00:05:55.203 END TEST accel_wrong_workload 00:05:55.203 09:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:55.203 ************************************ 00:05:55.203 09:43:44 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:55.203 09:43:44 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:55.203 09:43:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.203 09:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:55.203 ************************************ 00:05:55.203 START TEST accel_negative_buffers 00:05:55.203 ************************************ 00:05:55.203 09:43:44 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:55.203 09:43:44 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.203 09:43:44 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:55.203 09:43:44 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:55.203 09:43:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.203 09:43:44 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:55.203 09:43:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.203 09:43:44 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:55.203 09:43:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:55.203 09:43:44 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:55.203 09:43:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.203 09:43:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.203 09:43:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.203 09:43:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.203 09:43:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.203 09:43:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.203 09:43:44 -- accel/accel.sh@42 -- # jq -r . 00:05:55.203 -x option must be non-negative. 00:05:55.203 [2024-12-15 09:43:44.074124] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:55.203 accel_perf options: 00:05:55.203 [-h help message] 00:05:55.203 [-q queue depth per core] 00:05:55.203 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.203 [-T number of threads per core 00:05:55.203 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.203 [-t time in seconds] 00:05:55.203 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.203 [ dif_verify, , dif_generate, dif_generate_copy 00:05:55.203 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.203 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.203 [-S for crc32c workload, use this seed value (default 0) 00:05:55.203 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.203 [-f for fill workload, use this BYTE value (default 255) 00:05:55.203 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.203 [-y verify result if this switch is on] 00:05:55.203 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.203 Can be used to spread operations across a wider range of memory. 
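Both of these negative tests exercise accel_perf's argument parsing rather than any accel engine: foobar is not in the workload list, and -x must be non-negative (the usage text above notes xor wants at least two source buffers). A valid xor invocation for contrast, with flags taken from that usage text — shown for illustration only:

  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
      # xor across three source buffers, verify the result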
00:05:55.203 09:43:44 -- common/autotest_common.sh@653 -- # es=1 00:05:55.203 09:43:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.203 09:43:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.203 09:43:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.203 00:05:55.203 real 0m0.059s 00:05:55.203 user 0m0.060s 00:05:55.203 sys 0m0.028s 00:05:55.203 ************************************ 00:05:55.203 END TEST accel_negative_buffers 00:05:55.203 ************************************ 00:05:55.203 09:43:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.203 09:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:55.203 09:43:44 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:55.203 09:43:44 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:55.203 09:43:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.203 09:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:55.203 ************************************ 00:05:55.203 START TEST accel_crc32c 00:05:55.203 ************************************ 00:05:55.203 09:43:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:55.203 09:43:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.203 09:43:44 -- accel/accel.sh@17 -- # local accel_module 00:05:55.203 09:43:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:55.203 09:43:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:55.203 09:43:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.203 09:43:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.203 09:43:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.203 09:43:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.203 09:43:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.203 09:43:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.203 09:43:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.203 09:43:44 -- accel/accel.sh@42 -- # jq -r . 00:05:55.203 [2024-12-15 09:43:44.184866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.203 [2024-12-15 09:43:44.185089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58519 ] 00:05:55.464 [2024-12-15 09:43:44.336910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.725 [2024-12-15 09:43:44.534151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.636 09:43:46 -- accel/accel.sh@18 -- # out=' 00:05:57.636 SPDK Configuration: 00:05:57.636 Core mask: 0x1 00:05:57.636 00:05:57.636 Accel Perf Configuration: 00:05:57.636 Workload Type: crc32c 00:05:57.636 CRC-32C seed: 32 00:05:57.636 Transfer size: 4096 bytes 00:05:57.636 Vector count 1 00:05:57.636 Module: software 00:05:57.636 Queue depth: 32 00:05:57.636 Allocate depth: 32 00:05:57.636 # threads/core: 1 00:05:57.636 Run time: 1 seconds 00:05:57.636 Verify: Yes 00:05:57.636 00:05:57.636 Running for 1 seconds... 
00:05:57.636 00:05:57.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.636 ------------------------------------------------------------------------------------ 00:05:57.637 0,0 455232/s 1778 MiB/s 0 0 00:05:57.637 ==================================================================================== 00:05:57.637 Total 455232/s 1778 MiB/s 0 0' 00:05:57.637 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:57.637 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:57.637 09:43:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:57.637 09:43:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:57.637 09:43:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.637 09:43:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.637 09:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.637 09:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.637 09:43:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.637 09:43:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.637 09:43:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.637 09:43:46 -- accel/accel.sh@42 -- # jq -r . 00:05:57.637 [2024-12-15 09:43:46.390624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.637 [2024-12-15 09:43:46.390757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58545 ] 00:05:57.637 [2024-12-15 09:43:46.546956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.897 [2024-12-15 09:43:46.789008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val=0x1 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val=crc32c 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- accel/accel.sh@21 -- # val=32 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.155 09:43:46 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.155 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.155 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val=software 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val=32 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val=32 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val=1 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val=Yes 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:58.156 09:43:46 -- accel/accel.sh@21 -- # val= 00:05:58.156 09:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # IFS=: 00:05:58.156 09:43:46 -- accel/accel.sh@20 -- # read -r var val 00:05:59.532 09:43:48 -- accel/accel.sh@21 -- # val= 00:05:59.532 09:43:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # IFS=: 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # read -r var val 00:05:59.532 09:43:48 -- accel/accel.sh@21 -- # val= 00:05:59.532 09:43:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # IFS=: 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # read -r var val 00:05:59.532 09:43:48 -- accel/accel.sh@21 -- # val= 00:05:59.532 09:43:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # IFS=: 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # read -r var val 00:05:59.532 09:43:48 -- accel/accel.sh@21 -- # val= 00:05:59.532 09:43:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # IFS=: 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # read -r var val 00:05:59.532 09:43:48 -- accel/accel.sh@21 -- # val= 00:05:59.532 09:43:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # IFS=: 00:05:59.532 09:43:48 -- 
accel/accel.sh@20 -- # read -r var val 00:05:59.532 09:43:48 -- accel/accel.sh@21 -- # val= 00:05:59.532 09:43:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # IFS=: 00:05:59.532 09:43:48 -- accel/accel.sh@20 -- # read -r var val 00:05:59.532 ************************************ 00:05:59.532 END TEST accel_crc32c 00:05:59.532 ************************************ 00:05:59.532 09:43:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.532 09:43:48 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:59.532 09:43:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.532 00:05:59.532 real 0m4.284s 00:05:59.532 user 0m3.784s 00:05:59.532 sys 0m0.283s 00:05:59.532 09:43:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.532 09:43:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.532 09:43:48 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:59.532 09:43:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:59.532 09:43:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.532 09:43:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.532 ************************************ 00:05:59.532 START TEST accel_crc32c_C2 00:05:59.532 ************************************ 00:05:59.532 09:43:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:59.532 09:43:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.532 09:43:48 -- accel/accel.sh@17 -- # local accel_module 00:05:59.532 09:43:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:59.532 09:43:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:59.532 09:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.532 09:43:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.532 09:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.532 09:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.532 09:43:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.532 09:43:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.532 09:43:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.532 09:43:48 -- accel/accel.sh@42 -- # jq -r . 00:05:59.532 [2024-12-15 09:43:48.522506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.532 [2024-12-15 09:43:48.522584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58586 ] 00:05:59.790 [2024-12-15 09:43:48.662708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.048 [2024-12-15 09:43:48.811403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.524 09:43:50 -- accel/accel.sh@18 -- # out=' 00:06:01.524 SPDK Configuration: 00:06:01.524 Core mask: 0x1 00:06:01.524 00:06:01.524 Accel Perf Configuration: 00:06:01.524 Workload Type: crc32c 00:06:01.524 CRC-32C seed: 0 00:06:01.524 Transfer size: 4096 bytes 00:06:01.524 Vector count 2 00:06:01.525 Module: software 00:06:01.525 Queue depth: 32 00:06:01.525 Allocate depth: 32 00:06:01.525 # threads/core: 1 00:06:01.525 Run time: 1 seconds 00:06:01.525 Verify: Yes 00:06:01.525 00:06:01.525 Running for 1 seconds... 
00:06:01.525 00:06:01.525 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.525 ------------------------------------------------------------------------------------ 00:06:01.525 0,0 482336/s 3768 MiB/s 0 0 00:06:01.525 ==================================================================================== 00:06:01.525 Total 482336/s 1884 MiB/s 0 0' 00:06:01.525 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:01.525 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:01.525 09:43:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.525 09:43:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.525 09:43:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.525 09:43:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.525 09:43:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.525 09:43:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.525 09:43:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.525 09:43:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.525 09:43:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.525 09:43:50 -- accel/accel.sh@42 -- # jq -r . 00:06:01.525 [2024-12-15 09:43:50.447708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.525 [2024-12-15 09:43:50.447955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58612 ] 00:06:01.786 [2024-12-15 09:43:50.597833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.786 [2024-12-15 09:43:50.780504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=0x1 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=crc32c 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=0 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=software 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=32 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=32 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=1 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val=Yes 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.047 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.047 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.047 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.048 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:02.048 09:43:50 -- accel/accel.sh@21 -- # val= 00:06:02.048 09:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.048 09:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:02.048 09:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:03.953 09:43:52 -- accel/accel.sh@21 -- # val= 00:06:03.953 09:43:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # read -r var val 00:06:03.953 09:43:52 -- accel/accel.sh@21 -- # val= 00:06:03.953 09:43:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # read -r var val 00:06:03.953 09:43:52 -- accel/accel.sh@21 -- # val= 00:06:03.953 09:43:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # read -r var val 00:06:03.953 09:43:52 -- accel/accel.sh@21 -- # val= 00:06:03.953 09:43:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # read -r var val 00:06:03.953 09:43:52 -- accel/accel.sh@21 -- # val= 00:06:03.953 09:43:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:03.953 09:43:52 -- 
accel/accel.sh@20 -- # read -r var val 00:06:03.953 09:43:52 -- accel/accel.sh@21 -- # val= 00:06:03.953 09:43:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:03.953 09:43:52 -- accel/accel.sh@20 -- # read -r var val 00:06:03.953 ************************************ 00:06:03.953 END TEST accel_crc32c_C2 00:06:03.953 ************************************ 00:06:03.953 09:43:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.953 09:43:52 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:03.953 09:43:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.953 00:06:03.953 real 0m4.029s 00:06:03.953 user 0m3.604s 00:06:03.953 sys 0m0.220s 00:06:03.953 09:43:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.953 09:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 09:43:52 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:03.953 09:43:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:03.953 09:43:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.953 09:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:03.953 ************************************ 00:06:03.953 START TEST accel_copy 00:06:03.953 ************************************ 00:06:03.953 09:43:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:03.953 09:43:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.953 09:43:52 -- accel/accel.sh@17 -- # local accel_module 00:06:03.953 09:43:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:03.953 09:43:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:03.953 09:43:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.953 09:43:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.953 09:43:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.953 09:43:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.953 09:43:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.953 09:43:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.953 09:43:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.954 09:43:52 -- accel/accel.sh@42 -- # jq -r . 00:06:03.954 [2024-12-15 09:43:52.614492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.954 [2024-12-15 09:43:52.614591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58653 ] 00:06:03.954 [2024-12-15 09:43:52.762320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.954 [2024-12-15 09:43:52.938503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.870 09:43:54 -- accel/accel.sh@18 -- # out=' 00:06:05.870 SPDK Configuration: 00:06:05.870 Core mask: 0x1 00:06:05.870 00:06:05.870 Accel Perf Configuration: 00:06:05.870 Workload Type: copy 00:06:05.870 Transfer size: 4096 bytes 00:06:05.870 Vector count 1 00:06:05.870 Module: software 00:06:05.870 Queue depth: 32 00:06:05.870 Allocate depth: 32 00:06:05.870 # threads/core: 1 00:06:05.870 Run time: 1 seconds 00:06:05.870 Verify: Yes 00:06:05.870 00:06:05.870 Running for 1 seconds... 
00:06:05.870 00:06:05.870 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.870 ------------------------------------------------------------------------------------ 00:06:05.870 0,0 282624/s 1104 MiB/s 0 0 00:06:05.870 ==================================================================================== 00:06:05.870 Total 282624/s 1104 MiB/s 0 0' 00:06:05.870 09:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:05.870 09:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:05.870 09:43:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:05.870 09:43:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:05.870 09:43:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.870 09:43:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.870 09:43:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.870 09:43:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.870 09:43:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.870 09:43:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.870 09:43:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.870 09:43:54 -- accel/accel.sh@42 -- # jq -r . 00:06:05.870 [2024-12-15 09:43:54.720502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.870 [2024-12-15 09:43:54.720736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58680 ] 00:06:05.870 [2024-12-15 09:43:54.868147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.132 [2024-12-15 09:43:55.044183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=0x1 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=copy 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- 
accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=software 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=32 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=32 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=1 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val=Yes 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.392 09:43:55 -- accel/accel.sh@21 -- # val= 00:06:06.392 09:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:06.392 09:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:07.778 09:43:56 -- accel/accel.sh@21 -- # val= 00:06:07.778 09:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.778 09:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:08.039 09:43:56 -- accel/accel.sh@21 -- # val= 00:06:08.039 09:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:08.039 09:43:56 -- accel/accel.sh@21 -- # val= 00:06:08.039 09:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:08.039 09:43:56 -- accel/accel.sh@21 -- # val= 00:06:08.039 09:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:08.039 09:43:56 -- accel/accel.sh@21 -- # val= 00:06:08.039 09:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:08.039 09:43:56 -- accel/accel.sh@21 -- # val= 00:06:08.039 09:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.039 09:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:08.039 09:43:56 -- 
accel/accel.sh@20 -- # read -r var val 00:06:08.039 09:43:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.039 09:43:56 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:08.039 09:43:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.039 00:06:08.039 real 0m4.232s 00:06:08.039 user 0m1.887s 00:06:08.039 sys 0m0.123s 00:06:08.039 09:43:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.039 09:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:08.039 ************************************ 00:06:08.039 END TEST accel_copy 00:06:08.039 ************************************ 00:06:08.039 09:43:56 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.039 09:43:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:08.039 09:43:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.039 09:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:08.039 ************************************ 00:06:08.039 START TEST accel_fill 00:06:08.039 ************************************ 00:06:08.039 09:43:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.039 09:43:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.039 09:43:56 -- accel/accel.sh@17 -- # local accel_module 00:06:08.039 09:43:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.039 09:43:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.039 09:43:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.039 09:43:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.039 09:43:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.039 09:43:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.039 09:43:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.039 09:43:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.039 09:43:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.039 09:43:56 -- accel/accel.sh@42 -- # jq -r . 00:06:08.039 [2024-12-15 09:43:56.904018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.039 [2024-12-15 09:43:56.904128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58726 ] 00:06:08.039 [2024-12-15 09:43:57.053681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.300 [2024-12-15 09:43:57.232512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.217 09:43:58 -- accel/accel.sh@18 -- # out=' 00:06:10.217 SPDK Configuration: 00:06:10.217 Core mask: 0x1 00:06:10.217 00:06:10.217 Accel Perf Configuration: 00:06:10.217 Workload Type: fill 00:06:10.217 Fill pattern: 0x80 00:06:10.217 Transfer size: 4096 bytes 00:06:10.217 Vector count 1 00:06:10.217 Module: software 00:06:10.217 Queue depth: 64 00:06:10.217 Allocate depth: 64 00:06:10.217 # threads/core: 1 00:06:10.217 Run time: 1 seconds 00:06:10.217 Verify: Yes 00:06:10.217 00:06:10.217 Running for 1 seconds... 
00:06:10.217 00:06:10.217 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.217 ------------------------------------------------------------------------------------ 00:06:10.217 0,0 456768/s 1784 MiB/s 0 0 00:06:10.217 ==================================================================================== 00:06:10.217 Total 456768/s 1784 MiB/s 0 0' 00:06:10.217 09:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:10.217 09:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:10.217 09:43:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.217 09:43:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.217 09:43:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.217 09:43:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.217 09:43:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.217 09:43:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.217 09:43:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.217 09:43:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.217 09:43:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.217 09:43:58 -- accel/accel.sh@42 -- # jq -r . 00:06:10.217 [2024-12-15 09:43:59.023955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.217 [2024-12-15 09:43:59.024068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58752 ] 00:06:10.217 [2024-12-15 09:43:59.173821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.478 [2024-12-15 09:43:59.385532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=0x1 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=fill 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=0x80 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 
00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=software 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=64 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=64 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=1 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val=Yes 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:10.740 09:43:59 -- accel/accel.sh@21 -- # val= 00:06:10.740 09:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:10.740 09:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 09:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.655 09:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 09:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.655 09:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 09:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.655 09:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 09:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.655 09:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 09:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.655 09:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # IFS=: 
00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 09:44:01 -- accel/accel.sh@21 -- # val= 00:06:12.655 09:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:12.655 09:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:12.655 ************************************ 00:06:12.655 END TEST accel_fill 00:06:12.655 ************************************ 00:06:12.655 09:44:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.655 09:44:01 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:12.655 09:44:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.655 00:06:12.655 real 0m4.326s 00:06:12.655 user 0m3.829s 00:06:12.655 sys 0m0.284s 00:06:12.655 09:44:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.655 09:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 09:44:01 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:12.655 09:44:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.655 09:44:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.655 09:44:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 ************************************ 00:06:12.655 START TEST accel_copy_crc32c 00:06:12.655 ************************************ 00:06:12.655 09:44:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:12.655 09:44:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.655 09:44:01 -- accel/accel.sh@17 -- # local accel_module 00:06:12.655 09:44:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:12.655 09:44:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:12.655 09:44:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.655 09:44:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.655 09:44:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.655 09:44:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.655 09:44:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.655 09:44:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.655 09:44:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.655 09:44:01 -- accel/accel.sh@42 -- # jq -r . 00:06:12.655 [2024-12-15 09:44:01.287054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.655 [2024-12-15 09:44:01.287166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:06:12.655 [2024-12-15 09:44:01.435637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.655 [2024-12-15 09:44:01.572465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.554 09:44:03 -- accel/accel.sh@18 -- # out=' 00:06:14.554 SPDK Configuration: 00:06:14.554 Core mask: 0x1 00:06:14.554 00:06:14.554 Accel Perf Configuration: 00:06:14.554 Workload Type: copy_crc32c 00:06:14.554 CRC-32C seed: 0 00:06:14.554 Vector size: 4096 bytes 00:06:14.554 Transfer size: 4096 bytes 00:06:14.554 Vector count 1 00:06:14.554 Module: software 00:06:14.554 Queue depth: 32 00:06:14.554 Allocate depth: 32 00:06:14.554 # threads/core: 1 00:06:14.554 Run time: 1 seconds 00:06:14.554 Verify: Yes 00:06:14.554 00:06:14.554 Running for 1 seconds... 
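Note: the accel_perf flag meanings in the command lines above can be read directly off the configuration printouts in this log: -t 1 maps to "Run time: 1 seconds", -w to "Workload Type", -y to "Verify: Yes", -f 128 to "Fill pattern: 0x80", and -q/-a to "Queue depth"/"Allocate depth". The -c /dev/fd/62 argument is the JSON accel config that the harness's build_accel_config helper pipes in. A minimal standalone sketch, assuming the same build path as this CI VM and assuming accel_perf falls back to software modules when -c is omitted (the harness always passes it):

    # same copy, fill, and copy_crc32c workloads as the TEST sections here, 1-second verified runs
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y

Bandwidth in the result tables is Transfers × Transfer size. A throwaway shell check (mibps is a hypothetical helper, not part of the SPDK tree):

    # MiB/s = transfers_per_sec * bytes_per_transfer / 2^20 (integer division)
    mibps() { echo $(( $1 * $2 / 1048576 )); }
    mibps 282624 4096   # -> 1104, the copy table above
    mibps 456768 4096   # -> 1784, the fill table above

The same arithmetic is worth keeping in mind for the copy_crc32c -C 2 table further down: its per-core row uses the 8192-byte transfer size (232832 × 8192 B, about 1819 MiB/s) while its Total row appears to use the 4096-byte vector size instead (232832 × 4096 B, about 909 MiB/s). The transfer counts match, so the halved Total figure reads as a reporting quirk of the tool, not a performance drop.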
00:06:14.554 00:06:14.554 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.554 ------------------------------------------------------------------------------------ 00:06:14.554 0,0 311584/s 1217 MiB/s 0 0 00:06:14.554 ==================================================================================== 00:06:14.554 Total 311584/s 1217 MiB/s 0 0' 00:06:14.554 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.554 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.554 09:44:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:14.554 09:44:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:14.554 09:44:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.554 09:44:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.554 09:44:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.554 09:44:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.554 09:44:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.554 09:44:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.554 09:44:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.554 09:44:03 -- accel/accel.sh@42 -- # jq -r . 00:06:14.554 [2024-12-15 09:44:03.186419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.554 [2024-12-15 09:44:03.186520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58819 ] 00:06:14.554 [2024-12-15 09:44:03.332926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.554 [2024-12-15 09:44:03.470511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=0x1 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=0 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 
09:44:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=software 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=32 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=32 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=1 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val=Yes 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:14.813 09:44:03 -- accel/accel.sh@21 -- # val= 00:06:14.813 09:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:14.813 09:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.280 09:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.280 09:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.280 09:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.280 09:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # IFS=: 
00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.280 09:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@21 -- # val= 00:06:16.280 09:44:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # IFS=: 00:06:16.280 09:44:05 -- accel/accel.sh@20 -- # read -r var val 00:06:16.280 09:44:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.280 09:44:05 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:16.280 09:44:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.280 00:06:16.280 real 0m3.788s 00:06:16.280 user 0m3.363s 00:06:16.280 sys 0m0.222s 00:06:16.280 09:44:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.280 09:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.280 ************************************ 00:06:16.280 END TEST accel_copy_crc32c 00:06:16.280 ************************************ 00:06:16.280 09:44:05 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:16.280 09:44:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.280 09:44:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.280 09:44:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.280 ************************************ 00:06:16.280 START TEST accel_copy_crc32c_C2 00:06:16.280 ************************************ 00:06:16.280 09:44:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:16.280 09:44:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.280 09:44:05 -- accel/accel.sh@17 -- # local accel_module 00:06:16.280 09:44:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:16.280 09:44:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:16.280 09:44:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.280 09:44:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.280 09:44:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.280 09:44:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.280 09:44:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.280 09:44:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.280 09:44:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.280 09:44:05 -- accel/accel.sh@42 -- # jq -r . 00:06:16.280 [2024-12-15 09:44:05.109103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:16.280 [2024-12-15 09:44:05.109182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58860 ] 00:06:16.280 [2024-12-15 09:44:05.248701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.537 [2024-12-15 09:44:05.386987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.437 09:44:06 -- accel/accel.sh@18 -- # out=' 00:06:18.437 SPDK Configuration: 00:06:18.437 Core mask: 0x1 00:06:18.437 00:06:18.437 Accel Perf Configuration: 00:06:18.437 Workload Type: copy_crc32c 00:06:18.437 CRC-32C seed: 0 00:06:18.437 Vector size: 4096 bytes 00:06:18.437 Transfer size: 8192 bytes 00:06:18.437 Vector count 2 00:06:18.437 Module: software 00:06:18.437 Queue depth: 32 00:06:18.437 Allocate depth: 32 00:06:18.437 # threads/core: 1 00:06:18.437 Run time: 1 seconds 00:06:18.437 Verify: Yes 00:06:18.437 00:06:18.437 Running for 1 seconds... 00:06:18.437 00:06:18.437 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.437 ------------------------------------------------------------------------------------ 00:06:18.437 0,0 232832/s 1819 MiB/s 0 0 00:06:18.437 ==================================================================================== 00:06:18.437 Total 232832/s 909 MiB/s 0 0' 00:06:18.437 09:44:06 -- accel/accel.sh@20 -- # IFS=: 00:06:18.437 09:44:06 -- accel/accel.sh@20 -- # read -r var val 00:06:18.437 09:44:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:18.437 09:44:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.437 09:44:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.437 09:44:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.437 09:44:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.437 09:44:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.437 09:44:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.437 09:44:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.437 09:44:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.437 09:44:06 -- accel/accel.sh@42 -- # jq -r . 00:06:18.437 [2024-12-15 09:44:06.996702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:18.437 [2024-12-15 09:44:06.996803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:06:18.437 [2024-12-15 09:44:07.145853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.437 [2024-12-15 09:44:07.315804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val=0x1 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.694 09:44:07 -- accel/accel.sh@21 -- # val=0 00:06:18.694 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.694 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val=software 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val=32 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val=32 
00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val=1 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val=Yes 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:18.695 09:44:07 -- accel/accel.sh@21 -- # val= 00:06:18.695 09:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # IFS=: 00:06:18.695 09:44:07 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@21 -- # val= 00:06:20.066 09:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # IFS=: 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@21 -- # val= 00:06:20.066 09:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # IFS=: 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@21 -- # val= 00:06:20.066 09:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # IFS=: 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@21 -- # val= 00:06:20.066 09:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # IFS=: 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@21 -- # val= 00:06:20.066 09:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # IFS=: 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@21 -- # val= 00:06:20.066 09:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # IFS=: 00:06:20.066 09:44:08 -- accel/accel.sh@20 -- # read -r var val 00:06:20.066 09:44:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.066 09:44:08 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:20.066 09:44:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.066 00:06:20.066 real 0m3.848s 00:06:20.066 user 0m3.436s 00:06:20.066 sys 0m0.206s 00:06:20.066 09:44:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.066 09:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:20.066 ************************************ 00:06:20.066 END TEST accel_copy_crc32c_C2 00:06:20.066 ************************************ 00:06:20.066 09:44:08 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:20.067 09:44:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:20.067 09:44:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.067 09:44:08 -- common/autotest_common.sh@10 -- # set +x 00:06:20.067 ************************************ 00:06:20.067 START TEST accel_dualcast 00:06:20.067 ************************************ 00:06:20.067 09:44:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:20.067 09:44:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.067 09:44:08 -- accel/accel.sh@17 -- # local accel_module 00:06:20.067 09:44:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:20.067 09:44:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:20.067 09:44:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.067 09:44:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.067 09:44:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.067 09:44:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.067 09:44:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.067 09:44:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.067 09:44:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.067 09:44:08 -- accel/accel.sh@42 -- # jq -r . 00:06:20.067 [2024-12-15 09:44:08.995546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.067 [2024-12-15 09:44:08.995646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58923 ] 00:06:20.325 [2024-12-15 09:44:09.143394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.325 [2024-12-15 09:44:09.280635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.223 09:44:10 -- accel/accel.sh@18 -- # out=' 00:06:22.223 SPDK Configuration: 00:06:22.223 Core mask: 0x1 00:06:22.223 00:06:22.223 Accel Perf Configuration: 00:06:22.223 Workload Type: dualcast 00:06:22.223 Transfer size: 4096 bytes 00:06:22.223 Vector count 1 00:06:22.223 Module: software 00:06:22.223 Queue depth: 32 00:06:22.223 Allocate depth: 32 00:06:22.223 # threads/core: 1 00:06:22.223 Run time: 1 seconds 00:06:22.223 Verify: Yes 00:06:22.223 00:06:22.223 Running for 1 seconds... 00:06:22.223 00:06:22.223 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.223 ------------------------------------------------------------------------------------ 00:06:22.223 0,0 440032/s 1718 MiB/s 0 0 00:06:22.223 ==================================================================================== 00:06:22.223 Total 440032/s 1718 MiB/s 0 0' 00:06:22.223 09:44:10 -- accel/accel.sh@20 -- # IFS=: 00:06:22.223 09:44:10 -- accel/accel.sh@20 -- # read -r var val 00:06:22.223 09:44:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:22.223 09:44:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:22.223 09:44:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.223 09:44:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.223 09:44:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.223 09:44:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.223 09:44:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.223 09:44:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.223 09:44:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.224 09:44:10 -- accel/accel.sh@42 -- # jq -r . 
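Note: the dualcast workload being tested here copies one source buffer to two separate destination buffers per operation. The result table above still counts bandwidth against the single 4096-byte source (440032 transfers/s × 4096 B, about 1718 MiB/s, matching the table), so the bytes actually written should be roughly double the reported figure. A minimal standalone sketch, under the same path assumptions as the earlier note:

    # one 4096-byte source copied to two destinations per transfer, 1-second verified run
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y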
00:06:22.224 [2024-12-15 09:44:10.886412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.224 [2024-12-15 09:44:10.886616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58949 ] 00:06:22.224 [2024-12-15 09:44:11.034325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.224 [2024-12-15 09:44:11.203053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=0x1 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=dualcast 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=software 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=32 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=32 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=1 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 
09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val=Yes 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:22.482 09:44:11 -- accel/accel.sh@21 -- # val= 00:06:22.482 09:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # IFS=: 00:06:22.482 09:44:11 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.858 09:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.858 09:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.858 09:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.858 09:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.858 09:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@21 -- # val= 00:06:23.858 09:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # IFS=: 00:06:23.858 09:44:12 -- accel/accel.sh@20 -- # read -r var val 00:06:23.858 09:44:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.858 09:44:12 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:23.858 09:44:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.858 00:06:23.858 real 0m3.843s 00:06:23.858 user 0m3.405s 00:06:23.858 sys 0m0.235s 00:06:23.858 09:44:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.858 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.858 ************************************ 00:06:23.858 END TEST accel_dualcast 00:06:23.858 ************************************ 00:06:23.858 09:44:12 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:23.858 09:44:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:23.858 09:44:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.858 09:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.858 ************************************ 00:06:23.858 START TEST accel_compare 00:06:23.858 ************************************ 00:06:23.858 09:44:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:23.858 
09:44:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.858 09:44:12 -- accel/accel.sh@17 -- # local accel_module 00:06:23.858 09:44:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:23.858 09:44:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:23.858 09:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.858 09:44:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.858 09:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.858 09:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.858 09:44:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.858 09:44:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.858 09:44:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.858 09:44:12 -- accel/accel.sh@42 -- # jq -r . 00:06:23.858 [2024-12-15 09:44:12.872874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.116 [2024-12-15 09:44:12.873069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:06:24.116 [2024-12-15 09:44:13.022006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.374 [2024-12-15 09:44:13.193408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.275 09:44:14 -- accel/accel.sh@18 -- # out=' 00:06:26.275 SPDK Configuration: 00:06:26.275 Core mask: 0x1 00:06:26.275 00:06:26.275 Accel Perf Configuration: 00:06:26.275 Workload Type: compare 00:06:26.275 Transfer size: 4096 bytes 00:06:26.275 Vector count 1 00:06:26.275 Module: software 00:06:26.275 Queue depth: 32 00:06:26.275 Allocate depth: 32 00:06:26.275 # threads/core: 1 00:06:26.275 Run time: 1 seconds 00:06:26.275 Verify: Yes 00:06:26.275 00:06:26.275 Running for 1 seconds... 00:06:26.275 00:06:26.275 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.275 ------------------------------------------------------------------------------------ 00:06:26.275 0,0 479648/s 1873 MiB/s 0 0 00:06:26.275 ==================================================================================== 00:06:26.275 Total 479648/s 1873 MiB/s 0 0' 00:06:26.275 09:44:14 -- accel/accel.sh@20 -- # IFS=: 00:06:26.275 09:44:14 -- accel/accel.sh@20 -- # read -r var val 00:06:26.275 09:44:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:26.275 09:44:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:26.275 09:44:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.275 09:44:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.275 09:44:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.275 09:44:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.275 09:44:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.275 09:44:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.275 09:44:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.275 09:44:14 -- accel/accel.sh@42 -- # jq -r . 00:06:26.275 [2024-12-15 09:44:14.845995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.275 [2024-12-15 09:44:14.846096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:06:26.275 [2024-12-15 09:44:14.985711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.275 [2024-12-15 09:44:15.154497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.533 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.533 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.533 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.533 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.533 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.533 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.533 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=0x1 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=compare 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=software 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=32 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=32 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=1 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val=Yes 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:26.534 09:44:15 -- accel/accel.sh@21 -- # val= 00:06:26.534 09:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # IFS=: 00:06:26.534 09:44:15 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.907 09:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.907 09:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.907 09:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.907 09:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.907 09:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@21 -- # val= 00:06:27.907 09:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # IFS=: 00:06:27.907 09:44:16 -- accel/accel.sh@20 -- # read -r var val 00:06:27.907 09:44:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.907 09:44:16 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:27.907 09:44:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.907 00:06:27.907 real 0m3.923s 00:06:27.907 user 0m3.482s 00:06:27.907 sys 0m0.234s 00:06:27.907 09:44:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.907 09:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:27.907 ************************************ 00:06:27.907 END TEST accel_compare 00:06:27.907 ************************************ 00:06:27.907 09:44:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.907 09:44:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.907 09:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.907 09:44:16 -- common/autotest_common.sh@10 -- # set +x 00:06:27.907 ************************************ 00:06:27.907 START TEST accel_xor 00:06:27.907 ************************************ 00:06:27.907 09:44:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:27.907 09:44:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.907 09:44:16 -- accel/accel.sh@17 -- # local accel_module 00:06:27.907 
09:44:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:27.907 09:44:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.907 09:44:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.907 09:44:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.907 09:44:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.907 09:44:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.907 09:44:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.907 09:44:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.907 09:44:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.907 09:44:16 -- accel/accel.sh@42 -- # jq -r . 00:06:27.907 [2024-12-15 09:44:16.832348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.907 [2024-12-15 09:44:16.832450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:06:28.166 [2024-12-15 09:44:16.978751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.166 [2024-12-15 09:44:17.116344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.066 09:44:18 -- accel/accel.sh@18 -- # out=' 00:06:30.066 SPDK Configuration: 00:06:30.066 Core mask: 0x1 00:06:30.066 00:06:30.066 Accel Perf Configuration: 00:06:30.066 Workload Type: xor 00:06:30.066 Source buffers: 2 00:06:30.066 Transfer size: 4096 bytes 00:06:30.066 Vector count 1 00:06:30.066 Module: software 00:06:30.066 Queue depth: 32 00:06:30.066 Allocate depth: 32 00:06:30.066 # threads/core: 1 00:06:30.066 Run time: 1 seconds 00:06:30.066 Verify: Yes 00:06:30.066 00:06:30.066 Running for 1 seconds... 00:06:30.066 00:06:30.066 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.066 ------------------------------------------------------------------------------------ 00:06:30.066 0,0 446016/s 1742 MiB/s 0 0 00:06:30.066 ==================================================================================== 00:06:30.066 Total 446016/s 1742 MiB/s 0 0' 00:06:30.066 09:44:18 -- accel/accel.sh@20 -- # IFS=: 00:06:30.066 09:44:18 -- accel/accel.sh@20 -- # read -r var val 00:06:30.066 09:44:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:30.066 09:44:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:30.066 09:44:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.066 09:44:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.066 09:44:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.066 09:44:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.066 09:44:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.066 09:44:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.066 09:44:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.066 09:44:18 -- accel/accel.sh@42 -- # jq -r . 00:06:30.066 [2024-12-15 09:44:18.726723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.067 [2024-12-15 09:44:18.726826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:06:30.067 [2024-12-15 09:44:18.872350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.067 [2024-12-15 09:44:19.008696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=0x1 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=xor 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=2 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=software 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=32 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=32 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=1 00:06:30.325 09:44:19 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val=Yes 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:30.325 09:44:19 -- accel/accel.sh@21 -- # val= 00:06:30.325 09:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # IFS=: 00:06:30.325 09:44:19 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.735 09:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.735 09:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.735 09:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.735 09:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.735 09:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@21 -- # val= 00:06:31.735 09:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # IFS=: 00:06:31.735 09:44:20 -- accel/accel.sh@20 -- # read -r var val 00:06:31.735 09:44:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.735 09:44:20 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:31.735 09:44:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.735 00:06:31.735 real 0m3.823s 00:06:31.735 user 0m3.395s 00:06:31.735 sys 0m0.224s 00:06:31.735 09:44:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.735 ************************************ 00:06:31.735 END TEST accel_xor 00:06:31.735 ************************************ 00:06:31.735 09:44:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.735 09:44:20 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:31.735 09:44:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:31.735 09:44:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.735 09:44:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.735 ************************************ 00:06:31.735 START TEST accel_xor 00:06:31.735 ************************************ 00:06:31.735 
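
The wall of case "$var" in / IFS=: / read -r var val lines that fills each of these tests is accel.sh walking a configuration listing one "key: value" pair at a time. A minimal sketch of that pattern, reconstructed only from the visible trace (the real accel.sh handles more keys and feeds the loop differently):

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type'*) accel_opc=${val# } ;;    # "xor" at accel.sh@24 above
        *Module*)          accel_module=${val# } ;; # "software" at accel.sh@23
    esac
done < <("$ACCEL_PERF" -t 1 -w xor -y)
# the accel.sh@28 assertions then confirm both were captured:
[[ -n $accel_module ]] && [[ -n $accel_opc ]]
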
09:44:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:31.735 09:44:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.735 09:44:20 -- accel/accel.sh@17 -- # local accel_module 00:06:31.735 09:44:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:31.735 09:44:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:31.735 09:44:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.736 09:44:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.736 09:44:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.736 09:44:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.736 09:44:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.736 09:44:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.736 09:44:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.736 09:44:20 -- accel/accel.sh@42 -- # jq -r . 00:06:31.736 [2024-12-15 09:44:20.716841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.736 [2024-12-15 09:44:20.716954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59122 ] 00:06:31.994 [2024-12-15 09:44:20.864600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.252 [2024-12-15 09:44:21.016964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.626 09:44:22 -- accel/accel.sh@18 -- # out=' 00:06:33.626 SPDK Configuration: 00:06:33.626 Core mask: 0x1 00:06:33.626 00:06:33.626 Accel Perf Configuration: 00:06:33.626 Workload Type: xor 00:06:33.626 Source buffers: 3 00:06:33.626 Transfer size: 4096 bytes 00:06:33.626 Vector count 1 00:06:33.626 Module: software 00:06:33.626 Queue depth: 32 00:06:33.626 Allocate depth: 32 00:06:33.626 # threads/core: 1 00:06:33.626 Run time: 1 seconds 00:06:33.626 Verify: Yes 00:06:33.626 00:06:33.626 Running for 1 seconds... 00:06:33.626 00:06:33.626 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.626 ------------------------------------------------------------------------------------ 00:06:33.626 0,0 402880/s 1573 MiB/s 0 0 00:06:33.626 ==================================================================================== 00:06:33.626 Total 402880/s 1573 MiB/s 0 0' 00:06:33.626 09:44:22 -- accel/accel.sh@20 -- # IFS=: 00:06:33.626 09:44:22 -- accel/accel.sh@20 -- # read -r var val 00:06:33.626 09:44:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:33.626 09:44:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:33.626 09:44:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.626 09:44:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.626 09:44:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.626 09:44:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.626 09:44:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.626 09:44:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.626 09:44:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.626 09:44:22 -- accel/accel.sh@42 -- # jq -r . 00:06:33.884 [2024-12-15 09:44:22.647246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
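
The throughput columns in these reports are internally consistent: transfers per second times the 4096-byte transfer size reproduces the MiB/s figure. Checking the three-buffer xor report above with shell arithmetic:

transfers=402880 size=4096
echo $(( transfers * size / 1024 / 1024 ))   # 1573, matching both report rows
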
00:06:33.884 [2024-12-15 09:44:22.647360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59148 ] 00:06:33.884 [2024-12-15 09:44:22.794638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.142 [2024-12-15 09:44:22.946993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=0x1 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=xor 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=3 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=software 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=32 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=32 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=1 00:06:34.142 09:44:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val=Yes 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:34.142 09:44:23 -- accel/accel.sh@21 -- # val= 00:06:34.142 09:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # IFS=: 00:06:34.142 09:44:23 -- accel/accel.sh@20 -- # read -r var val 00:06:35.516 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.516 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.516 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.516 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.516 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.775 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.775 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.775 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.775 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.775 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.775 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.775 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.775 09:44:24 -- accel/accel.sh@21 -- # val= 00:06:35.775 09:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # IFS=: 00:06:35.775 09:44:24 -- accel/accel.sh@20 -- # read -r var val 00:06:35.775 09:44:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.775 ************************************ 00:06:35.775 END TEST accel_xor 00:06:35.775 ************************************ 00:06:35.775 09:44:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:35.775 09:44:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.775 00:06:35.775 real 0m3.862s 00:06:35.775 user 0m3.421s 00:06:35.775 sys 0m0.233s 00:06:35.775 09:44:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.775 09:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.775 09:44:24 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:35.775 09:44:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:35.775 09:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.775 09:44:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.775 ************************************ 00:06:35.775 START TEST accel_dif_verify 00:06:35.775 ************************************ 
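
The dif_verify report that follows lists a 512-byte block size with 8 bytes of metadata, i.e. T10 protection information checked per block. Simple sizing arithmetic for one 4096-byte transfer (plain shell arithmetic, not SPDK code):

xfer=4096 blk=512 md=8
blocks=$(( xfer / blk ))                           # 8 blocks per transfer
echo "$blocks blocks, $(( blocks * md )) PI bytes" # 8 blocks, 64 PI bytes
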
00:06:35.775 09:44:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:35.775 09:44:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.775 09:44:24 -- accel/accel.sh@17 -- # local accel_module 00:06:35.775 09:44:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:35.775 09:44:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:35.775 09:44:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.775 09:44:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.775 09:44:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.775 09:44:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.775 09:44:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.775 09:44:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.775 09:44:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.775 09:44:24 -- accel/accel.sh@42 -- # jq -r . 00:06:35.775 [2024-12-15 09:44:24.611397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.775 [2024-12-15 09:44:24.611494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:06:35.775 [2024-12-15 09:44:24.755421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.033 [2024-12-15 09:44:24.937343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.946 09:44:26 -- accel/accel.sh@18 -- # out=' 00:06:37.946 SPDK Configuration: 00:06:37.946 Core mask: 0x1 00:06:37.946 00:06:37.946 Accel Perf Configuration: 00:06:37.946 Workload Type: dif_verify 00:06:37.946 Vector size: 4096 bytes 00:06:37.946 Transfer size: 4096 bytes 00:06:37.946 Block size: 512 bytes 00:06:37.946 Metadata size: 8 bytes 00:06:37.946 Vector count 1 00:06:37.946 Module: software 00:06:37.946 Queue depth: 32 00:06:37.946 Allocate depth: 32 00:06:37.946 # threads/core: 1 00:06:37.946 Run time: 1 seconds 00:06:37.946 Verify: No 00:06:37.946 00:06:37.946 Running for 1 seconds... 00:06:37.946 00:06:37.946 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.946 ------------------------------------------------------------------------------------ 00:06:37.946 0,0 98208/s 389 MiB/s 0 0 00:06:37.946 ==================================================================================== 00:06:37.946 Total 98208/s 383 MiB/s 0 0' 00:06:37.946 09:44:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:37.946 09:44:26 -- accel/accel.sh@20 -- # IFS=: 00:06:37.946 09:44:26 -- accel/accel.sh@20 -- # read -r var val 00:06:37.946 09:44:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:37.946 09:44:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.946 09:44:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.946 09:44:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.946 09:44:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.946 09:44:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.946 09:44:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.946 09:44:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.946 09:44:26 -- accel/accel.sh@42 -- # jq -r . 00:06:37.946 [2024-12-15 09:44:26.711044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
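
build_accel_config shows the plumbing for handing accel_perf a JSON config: an accel_json_cfg array (empty in every run here, so the -gt 0 guards at accel.sh@33-@35 all fail), a comma join via local IFS=,, a jq -r . validation, and delivery over a process-substitution fd, which is why the command lines read -c /dev/fd/62. A sketch under those assumptions; the actual JSON schema never appears in this log, so the {} fallback below is a placeholder:

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
accel_json_cfg=()                                 # stays empty in this section
json=$(IFS=,; printf '%s' "${accel_json_cfg[*]}") # comma join, as accel.sh@41 suggests
json=${json:-'{}'}                                # hypothetical fallback document
jq -r . <<<"$json" >/dev/null                     # the validation at accel.sh@42
"$ACCEL_PERF" -c <(printf '%s' "$json") -t 1 -w dif_verify
# <(...) expands to /dev/fd/NN, matching the '-c /dev/fd/62' in the trace
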
00:06:37.946 [2024-12-15 09:44:26.711148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ] 00:06:37.946 [2024-12-15 09:44:26.861116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.208 [2024-12-15 09:44:27.038610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val=0x1 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val=dif_verify 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val=software 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 
-- # val=32 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val=32 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val=1 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val=No 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:38.208 09:44:27 -- accel/accel.sh@21 -- # val= 00:06:38.208 09:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # IFS=: 00:06:38.208 09:44:27 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 09:44:28 -- accel/accel.sh@21 -- # val= 00:06:40.113 09:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 09:44:28 -- accel/accel.sh@21 -- # val= 00:06:40.113 09:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 09:44:28 -- accel/accel.sh@21 -- # val= 00:06:40.113 09:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 09:44:28 -- accel/accel.sh@21 -- # val= 00:06:40.113 09:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 09:44:28 -- accel/accel.sh@21 -- # val= 00:06:40.113 09:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 09:44:28 -- accel/accel.sh@21 -- # val= 00:06:40.113 09:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # IFS=: 00:06:40.113 09:44:28 -- accel/accel.sh@20 -- # read -r var val 00:06:40.113 ************************************ 00:06:40.113 END TEST accel_dif_verify 00:06:40.113 ************************************ 00:06:40.113 09:44:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.113 09:44:28 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:40.113 09:44:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.113 00:06:40.113 real 0m4.191s 00:06:40.113 user 0m3.745s 00:06:40.113 sys 0m0.240s 00:06:40.113 09:44:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.113 
09:44:28 -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 09:44:28 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:40.113 09:44:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:40.113 09:44:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.113 09:44:28 -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 ************************************ 00:06:40.113 START TEST accel_dif_generate 00:06:40.113 ************************************ 00:06:40.113 09:44:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:40.113 09:44:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.114 09:44:28 -- accel/accel.sh@17 -- # local accel_module 00:06:40.114 09:44:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:40.114 09:44:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:40.114 09:44:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.114 09:44:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.114 09:44:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.114 09:44:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.114 09:44:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.114 09:44:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.114 09:44:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.114 09:44:28 -- accel/accel.sh@42 -- # jq -r . 00:06:40.114 [2024-12-15 09:44:28.844673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.114 [2024-12-15 09:44:28.844776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:06:40.114 [2024-12-15 09:44:28.993425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.373 [2024-12-15 09:44:29.195493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.292 09:44:30 -- accel/accel.sh@18 -- # out=' 00:06:42.292 SPDK Configuration: 00:06:42.292 Core mask: 0x1 00:06:42.292 00:06:42.292 Accel Perf Configuration: 00:06:42.292 Workload Type: dif_generate 00:06:42.292 Vector size: 4096 bytes 00:06:42.292 Transfer size: 4096 bytes 00:06:42.292 Block size: 512 bytes 00:06:42.292 Metadata size: 8 bytes 00:06:42.292 Vector count 1 00:06:42.292 Module: software 00:06:42.292 Queue depth: 32 00:06:42.292 Allocate depth: 32 00:06:42.292 # threads/core: 1 00:06:42.292 Run time: 1 seconds 00:06:42.292 Verify: No 00:06:42.292 00:06:42.292 Running for 1 seconds... 
00:06:42.292 00:06:42.292 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.292 ------------------------------------------------------------------------------------ 00:06:42.292 0,0 118336/s 469 MiB/s 0 0 00:06:42.292 ==================================================================================== 00:06:42.292 Total 118336/s 462 MiB/s 0 0' 00:06:42.292 09:44:30 -- accel/accel.sh@20 -- # IFS=: 00:06:42.292 09:44:30 -- accel/accel.sh@20 -- # read -r var val 00:06:42.292 09:44:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:42.292 09:44:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:42.292 09:44:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.292 09:44:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.292 09:44:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.292 09:44:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.292 09:44:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.292 09:44:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.292 09:44:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.292 09:44:30 -- accel/accel.sh@42 -- # jq -r . 00:06:42.292 [2024-12-15 09:44:30.971201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.293 [2024-12-15 09:44:30.971318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:06:42.293 [2024-12-15 09:44:31.121147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.293 [2024-12-15 09:44:31.300454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=0x1 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=dif_generate 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 
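
Note the "Verify: No" row above: the dif workloads in this section are invoked without accel_perf's -y switch, whereas the earlier compare and xor runs carry it and report "Verify: Yes". The flag requests accel_perf's own post-operation verification pass, independent of what the workload computes:

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w xor -y          # report shows "Verify: Yes"
"$ACCEL_PERF" -t 1 -w dif_generate    # report shows "Verify: No"
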
00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=software 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=32 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=32 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=1 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val=No 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:42.554 09:44:31 -- accel/accel.sh@21 -- # val= 00:06:42.554 09:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # IFS=: 00:06:42.554 09:44:31 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.466 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.466 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.466 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.466 09:44:33 -- 
accel/accel.sh@20 -- # IFS=: 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.466 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.466 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@21 -- # val= 00:06:44.466 09:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # IFS=: 00:06:44.466 09:44:33 -- accel/accel.sh@20 -- # read -r var val 00:06:44.466 09:44:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.466 09:44:33 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:44.466 09:44:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.466 00:06:44.466 real 0m4.246s 00:06:44.466 user 0m3.791s 00:06:44.466 sys 0m0.251s 00:06:44.466 09:44:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.466 09:44:33 -- common/autotest_common.sh@10 -- # set +x 00:06:44.466 ************************************ 00:06:44.466 END TEST accel_dif_generate 00:06:44.466 ************************************ 00:06:44.466 09:44:33 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:44.466 09:44:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:44.466 09:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.466 09:44:33 -- common/autotest_common.sh@10 -- # set +x 00:06:44.466 ************************************ 00:06:44.466 START TEST accel_dif_generate_copy 00:06:44.466 ************************************ 00:06:44.466 09:44:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:44.466 09:44:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.466 09:44:33 -- accel/accel.sh@17 -- # local accel_module 00:06:44.466 09:44:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:44.466 09:44:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:44.466 09:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.466 09:44:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.466 09:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.467 09:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.467 09:44:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.467 09:44:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.467 09:44:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.467 09:44:33 -- accel/accel.sh@42 -- # jq -r . 00:06:44.467 [2024-12-15 09:44:33.154757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
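
Every block in this section is wrapped by run_test from autotest_common.sh, which prints the asterisk banners and accounts for the real/user/sys timing lines. A reconstructed shape, inferred only from that visible output (the real helper also manages xtrace and exit codes):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # the source of the real/user/sys lines above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# accel_test is the suite's own helper (not reproduced here):
run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
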
00:06:44.467 [2024-12-15 09:44:33.154856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:06:44.467 [2024-12-15 09:44:33.305286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.467 [2024-12-15 09:44:33.480066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.435 09:44:35 -- accel/accel.sh@18 -- # out=' 00:06:46.435 SPDK Configuration: 00:06:46.435 Core mask: 0x1 00:06:46.435 00:06:46.435 Accel Perf Configuration: 00:06:46.435 Workload Type: dif_generate_copy 00:06:46.435 Vector size: 4096 bytes 00:06:46.435 Transfer size: 4096 bytes 00:06:46.435 Vector count 1 00:06:46.435 Module: software 00:06:46.435 Queue depth: 32 00:06:46.435 Allocate depth: 32 00:06:46.435 # threads/core: 1 00:06:46.435 Run time: 1 seconds 00:06:46.435 Verify: No 00:06:46.435 00:06:46.435 Running for 1 seconds... 00:06:46.435 00:06:46.435 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.435 ------------------------------------------------------------------------------------ 00:06:46.435 0,0 90432/s 358 MiB/s 0 0 00:06:46.435 ==================================================================================== 00:06:46.435 Total 90432/s 353 MiB/s 0 0' 00:06:46.435 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.435 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.435 09:44:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.435 09:44:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.435 09:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.435 09:44:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.435 09:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.435 09:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.435 09:44:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.435 09:44:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.435 09:44:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.435 09:44:35 -- accel/accel.sh@42 -- # jq -r . 00:06:46.435 [2024-12-15 09:44:35.269685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
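
Each accel_perf instance registers a unique --file-prefix=spdk_pidNNNN in its DPDK EAL parameters (alongside --huge-unlink), which keeps hugepage state from different runs apart and makes instances easy to line up in a long log. For example, against a captured copy of this log (build.log is a hypothetical filename):

grep -o 'spdk_pid[0-9]*' build.log | sort -u
# spdk_pid59055, spdk_pid59081, spdk_pid59122, ... one per run in this section
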
00:06:46.435 [2024-12-15 09:44:35.269790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59349 ] 00:06:46.435 [2024-12-15 09:44:35.418149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.694 [2024-12-15 09:44:35.595832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val=0x1 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.951 09:44:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.951 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.951 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val=software 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val=32 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val=32 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 
-- # val=1 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val=No 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:46.952 09:44:35 -- accel/accel.sh@21 -- # val= 00:06:46.952 09:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # IFS=: 00:06:46.952 09:44:35 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@21 -- # val= 00:06:48.328 09:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@21 -- # val= 00:06:48.328 09:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@21 -- # val= 00:06:48.328 09:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@21 -- # val= 00:06:48.328 09:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@21 -- # val= 00:06:48.328 09:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@21 -- # val= 00:06:48.328 09:44:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # IFS=: 00:06:48.328 09:44:37 -- accel/accel.sh@20 -- # read -r var val 00:06:48.328 09:44:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.328 09:44:37 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:48.328 09:44:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.328 00:06:48.328 real 0m4.074s 00:06:48.328 user 0m3.623s 00:06:48.328 sys 0m0.246s 00:06:48.328 09:44:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.328 ************************************ 00:06:48.328 END TEST accel_dif_generate_copy 00:06:48.328 ************************************ 00:06:48.328 09:44:37 -- common/autotest_common.sh@10 -- # set +x 00:06:48.328 09:44:37 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:48.328 09:44:37 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.328 09:44:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:48.328 09:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.328 09:44:37 -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.328 ************************************ 00:06:48.328 START TEST accel_comp 00:06:48.328 ************************************ 00:06:48.328 09:44:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.328 09:44:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.328 09:44:37 -- accel/accel.sh@17 -- # local accel_module 00:06:48.328 09:44:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.329 09:44:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.329 09:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.329 09:44:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.329 09:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.329 09:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.329 09:44:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.329 09:44:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.329 09:44:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.329 09:44:37 -- accel/accel.sh@42 -- # jq -r . 00:06:48.329 [2024-12-15 09:44:37.280892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.329 [2024-12-15 09:44:37.280999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:06:48.588 [2024-12-15 09:44:37.427867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.588 [2024-12-15 09:44:37.573071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.487 09:44:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.487 00:06:50.488 SPDK Configuration: 00:06:50.488 Core mask: 0x1 00:06:50.488 00:06:50.488 Accel Perf Configuration: 00:06:50.488 Workload Type: compress 00:06:50.488 Transfer size: 4096 bytes 00:06:50.488 Vector count 1 00:06:50.488 Module: software 00:06:50.488 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.488 Queue depth: 32 00:06:50.488 Allocate depth: 32 00:06:50.488 # threads/core: 1 00:06:50.488 Run time: 1 seconds 00:06:50.488 Verify: No 00:06:50.488 00:06:50.488 Running for 1 seconds... 
00:06:50.488 00:06:50.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.488 ------------------------------------------------------------------------------------ 00:06:50.488 0,0 64128/s 267 MiB/s 0 0 00:06:50.488 ==================================================================================== 00:06:50.488 Total 64128/s 250 MiB/s 0 0' 00:06:50.488 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.488 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.488 09:44:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.488 09:44:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.488 09:44:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.488 09:44:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.488 09:44:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.488 09:44:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.488 09:44:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.488 09:44:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.488 09:44:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.488 09:44:39 -- accel/accel.sh@42 -- # jq -r . 00:06:50.488 [2024-12-15 09:44:39.191396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.488 [2024-12-15 09:44:39.191500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59418 ] 00:06:50.488 [2024-12-15 09:44:39.339997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.488 [2024-12-15 09:44:39.485813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=0x1 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=compress 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 
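
compress is the first workload in this section that needs real input data: the -l flag points accel_perf at a payload file, hence the "Preparing input file..." and "File Name:" lines in the report above. The invocation, with the path taken from this log:

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
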
00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=software 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=32 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=32 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=1 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val=No 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:50.747 09:44:39 -- accel/accel.sh@21 -- # val= 00:06:50.747 09:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # IFS=: 00:06:50.747 09:44:39 -- accel/accel.sh@20 -- # read -r var val 00:06:52.122 09:44:41 -- accel/accel.sh@21 -- # val= 00:06:52.122 09:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:52.122 09:44:41 -- accel/accel.sh@21 -- # val= 00:06:52.122 09:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:52.122 09:44:41 -- accel/accel.sh@21 -- # val= 00:06:52.122 09:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:52.122 09:44:41 -- accel/accel.sh@21 -- # val= 
00:06:52.122 09:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:52.122 09:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:52.122 09:44:41 -- accel/accel.sh@21 -- # val= 00:06:52.123 09:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.123 09:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:52.123 09:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:52.123 09:44:41 -- accel/accel.sh@21 -- # val= 00:06:52.123 09:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.123 09:44:41 -- accel/accel.sh@20 -- # IFS=: 00:06:52.123 09:44:41 -- accel/accel.sh@20 -- # read -r var val 00:06:52.123 09:44:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.123 09:44:41 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:52.123 09:44:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.123 00:06:52.123 real 0m3.818s 00:06:52.123 user 0m3.386s 00:06:52.123 sys 0m0.231s 00:06:52.123 09:44:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.123 09:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.123 ************************************ 00:06:52.123 END TEST accel_comp 00:06:52.123 ************************************ 00:06:52.123 09:44:41 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.123 09:44:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.123 09:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.123 09:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.123 ************************************ 00:06:52.123 START TEST accel_decomp 00:06:52.123 ************************************ 00:06:52.123 09:44:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.123 09:44:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.123 09:44:41 -- accel/accel.sh@17 -- # local accel_module 00:06:52.123 09:44:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.123 09:44:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.123 09:44:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.123 09:44:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.123 09:44:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.123 09:44:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.123 09:44:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.123 09:44:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.123 09:44:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.123 09:44:41 -- accel/accel.sh@42 -- # jq -r . 00:06:52.123 [2024-12-15 09:44:41.130975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.123 [2024-12-15 09:44:41.131083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59459 ] 00:06:52.382 [2024-12-15 09:44:41.276061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.640 [2024-12-15 09:44:41.419532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.017 09:44:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.017
00:06:54.017 SPDK Configuration:
00:06:54.017 Core mask: 0x1
00:06:54.017
00:06:54.018 Accel Perf Configuration:
00:06:54.018 Workload Type: decompress
00:06:54.018 Transfer size: 4096 bytes
00:06:54.018 Vector count 1
00:06:54.018 Module: software
00:06:54.018 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:54.018 Queue depth: 32
00:06:54.018 Allocate depth: 32
00:06:54.018 # threads/core: 1
00:06:54.018 Run time: 1 seconds
00:06:54.018 Verify: Yes
00:06:54.018
00:06:54.018 Running for 1 seconds...
00:06:54.018
00:06:54.018 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:54.018 ------------------------------------------------------------------------------------
00:06:54.018 0,0 76352/s 298 MiB/s 0 0
00:06:54.018 ====================================================================================
00:06:54.018 Total 76352/s 298 MiB/s 0 0'
00:06:54.018 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.018 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.018 09:44:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.018 09:44:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.019 09:44:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.019 09:44:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.019 09:44:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.019 09:44:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.019 09:44:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.019 09:44:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.019 09:44:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.019 09:44:43 -- accel/accel.sh@42 -- # jq -r . 00:06:54.279 [2024-12-15 09:44:43.055128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
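[Editor's note] In these accel_perf tables, bandwidth is simply transfers/s multiplied by the transfer size, so the Total row above can be cross-checked with plain shell arithmetic (an editorial sanity check, not part of the captured log):

    # 76352 transfers/s at 4096 bytes each, converted to MiB/s (integer-truncated):
    echo $(( 76352 * 4096 / 1024 / 1024 ))   # prints 298, matching "Total 76352/s 298 MiB/s"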
00:06:54.279 [2024-12-15 09:44:43.055229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59480 ] 00:06:54.279 [2024-12-15 09:44:43.208387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.540 [2024-12-15 09:44:43.418615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.800 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.800 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.800 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.800 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.800 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.800 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.800 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.800 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.800 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.800 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.800 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.800 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=0x1 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=decompress 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=software 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=32 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- 
accel/accel.sh@21 -- # val=32 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=1 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val=Yes 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:54.801 09:44:43 -- accel/accel.sh@21 -- # val= 00:06:54.801 09:44:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # IFS=: 00:06:54.801 09:44:43 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@21 -- # val= 00:06:56.186 09:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@21 -- # val= 00:06:56.186 09:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@21 -- # val= 00:06:56.186 09:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@21 -- # val= 00:06:56.186 09:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@21 -- # val= 00:06:56.186 09:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@21 -- # val= 00:06:56.186 09:44:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # IFS=: 00:06:56.186 09:44:45 -- accel/accel.sh@20 -- # read -r var val 00:06:56.186 09:44:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.186 09:44:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:56.186 09:44:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.186 00:06:56.186 real 0m4.071s 00:06:56.186 user 0m3.593s 00:06:56.186 sys 0m0.269s 00:06:56.186 ************************************ 00:06:56.186 END TEST accel_decomp 00:06:56.186 ************************************ 00:06:56.186 09:44:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.186 09:44:45 -- common/autotest_common.sh@10 -- # set +x 00:06:56.447 09:44:45 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
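[Editor's note] run_test drives the accel_test helper, which launches the accel_perf example binary seen in the xtrace. A hypothetical standalone reproduction is sketched below; in the log, -c /dev/fd/62 feeds accel_perf the JSON accel config assembled by build_accel_config and jq, so a manual run would substitute an ordinary config file (/tmp/accel.json is a made-up stand-in):

    # Hypothetical manual rerun of the traced command (config path is a stand-in
    # for the harness-provided /dev/fd/62):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp/accel.json \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
    # -o 0 appears to request whole-file transfers: the config echo below reports
    # "Transfer size: 111250 bytes" rather than the 4096-byte default.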
00:06:56.447 09:44:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:56.447 09:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.447 09:44:45 -- common/autotest_common.sh@10 -- # set +x 00:06:56.447 ************************************ 00:06:56.447 START TEST accel_decmop_full 00:06:56.447 ************************************ 00:06:56.447 09:44:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.447 09:44:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.447 09:44:45 -- accel/accel.sh@17 -- # local accel_module 00:06:56.447 09:44:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.447 09:44:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:56.447 09:44:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.447 09:44:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.447 09:44:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.447 09:44:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.447 09:44:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.447 09:44:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.447 09:44:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.447 09:44:45 -- accel/accel.sh@42 -- # jq -r . 00:06:56.447 [2024-12-15 09:44:45.265762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.447 [2024-12-15 09:44:45.265874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59525 ] 00:06:56.447 [2024-12-15 09:44:45.415311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.706 [2024-12-15 09:44:45.598839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.618 09:44:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:58.618 00:06:58.618 SPDK Configuration: 00:06:58.618 Core mask: 0x1 00:06:58.618 00:06:58.618 Accel Perf Configuration: 00:06:58.618 Workload Type: decompress 00:06:58.618 Transfer size: 111250 bytes 00:06:58.618 Vector count 1 00:06:58.618 Module: software 00:06:58.618 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.618 Queue depth: 32 00:06:58.618 Allocate depth: 32 00:06:58.618 # threads/core: 1 00:06:58.618 Run time: 1 seconds 00:06:58.618 Verify: Yes 00:06:58.618 00:06:58.618 Running for 1 seconds... 
00:06:58.618
00:06:58.618 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:58.618 ------------------------------------------------------------------------------------
00:06:58.618 0,0 4352/s 461 MiB/s 0 0
00:06:58.618 ====================================================================================
00:06:58.618 Total 4352/s 461 MiB/s 0 0'
00:06:58.618 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.618 09:44:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.618 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.618 09:44:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.618 09:44:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.618 09:44:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.618 09:44:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.618 09:44:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.618 09:44:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.618 09:44:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.618 09:44:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.618 09:44:47 -- accel/accel.sh@42 -- # jq -r . 00:06:58.618 [2024-12-15 09:44:47.388940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.618 [2024-12-15 09:44:47.389349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:06:58.618 [2024-12-15 09:44:47.537728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.879 [2024-12-15 09:44:47.711712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=0x1 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=decompress 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:58.879 09:44:47 -- accel/accel.sh@20
-- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=software 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=32 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=32 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=1 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val=Yes 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:06:58.879 09:44:47 -- accel/accel.sh@21 -- # val= 00:06:58.879 09:44:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # IFS=: 00:06:58.879 09:44:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@21 -- # val= 00:07:00.791 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@21 -- # val= 00:07:00.791 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@21 -- # val= 00:07:00.791 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@21 -- # 
val= 00:07:00.791 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@21 -- # val= 00:07:00.791 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@21 -- # val= 00:07:00.791 09:44:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # IFS=: 00:07:00.791 09:44:49 -- accel/accel.sh@20 -- # read -r var val 00:07:00.791 09:44:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.791 09:44:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.791 09:44:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.791 00:07:00.791 real 0m4.274s 00:07:00.791 user 0m3.803s 00:07:00.791 sys 0m0.260s 00:07:00.791 09:44:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.791 09:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:00.791 ************************************ 00:07:00.791 END TEST accel_decmop_full 00:07:00.791 ************************************ 00:07:00.791 09:44:49 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.791 09:44:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:00.791 09:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.791 09:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:00.791 ************************************ 00:07:00.791 START TEST accel_decomp_mcore 00:07:00.791 ************************************ 00:07:00.791 09:44:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.791 09:44:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.791 09:44:49 -- accel/accel.sh@17 -- # local accel_module 00:07:00.791 09:44:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.791 09:44:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.791 09:44:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.791 09:44:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.791 09:44:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.791 09:44:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.791 09:44:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.791 09:44:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.791 09:44:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.791 09:44:49 -- accel/accel.sh@42 -- # jq -r . 00:07:00.791 [2024-12-15 09:44:49.590147] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
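[Editor's note] The -m 0xf core mask in the accel_perf invocation above selects cores 0-3 (binary 1111), which is why the EAL log that follows reports "Total cores available: 4" and starts four reactors. Decoding such a mask (editorial sketch):

    mask=0xf
    for bit in {0..7}; do
        (( (mask >> bit) & 1 )) && echo "core $bit selected"
    done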
00:07:00.791 [2024-12-15 09:44:49.590292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59593 ] 00:07:00.791 [2024-12-15 09:44:49.741059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.054 [2024-12-15 09:44:49.979319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.054 [2024-12-15 09:44:49.979541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.054 [2024-12-15 09:44:49.979882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.054 [2024-12-15 09:44:49.979916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.968 09:44:51 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:02.968
00:07:02.968 SPDK Configuration:
00:07:02.968 Core mask: 0xf
00:07:02.968
00:07:02.968 Accel Perf Configuration:
00:07:02.968 Workload Type: decompress
00:07:02.968 Transfer size: 4096 bytes
00:07:02.968 Vector count 1
00:07:02.968 Module: software
00:07:02.968 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:02.968 Queue depth: 32
00:07:02.968 Allocate depth: 32
00:07:02.968 # threads/core: 1
00:07:02.968 Run time: 1 seconds
00:07:02.968 Verify: Yes
00:07:02.968
00:07:02.968 Running for 1 seconds...
00:07:02.968
00:07:02.968 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:02.968 ------------------------------------------------------------------------------------
00:07:02.968 0,0 53472/s 208 MiB/s 0 0
00:07:02.968 3,0 54528/s 213 MiB/s 0 0
00:07:02.968 2,0 55296/s 216 MiB/s 0 0
00:07:02.968 1,0 54784/s 214 MiB/s 0 0
00:07:02.968 ====================================================================================
00:07:02.968 Total 218080/s 851 MiB/s 0 0'
00:07:02.969 09:44:51 -- accel/accel.sh@20 -- # IFS=: 00:07:02.969 09:44:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.969 09:44:51 -- accel/accel.sh@20 -- # read -r var val 00:07:02.969 09:44:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.969 09:44:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.969 09:44:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.969 09:44:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.969 09:44:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.969 09:44:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.969 09:44:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.969 09:44:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.969 09:44:51 -- accel/accel.sh@42 -- # jq -r . 00:07:02.969 [2024-12-15 09:44:51.882099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
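[Editor's note] The long IFS=: / read -r var val / case "$var" xtrace runs throughout this log are accel.sh splitting accel_perf's "Key: value" configuration echo to capture the opcode and module, which the [[ -n software ]] / [[ -n decompress ]] checks at the end of each test then assert. A minimal sketch of that pattern (not the verbatim accel.sh source; the capture file is hypothetical):

    # Parse "Key: value" lines such as "Workload Type: decompress" / "Module: software"
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;      # " decompress" -> "decompress"
            *Module*)          accel_module=${val# } ;;   # " software"   -> "software"
        esac
    done < accel_perf_output.txt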
00:07:02.969 [2024-12-15 09:44:51.882202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:07:03.231 [2024-12-15 09:44:52.024169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.231 [2024-12-15 09:44:52.205685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.231 [2024-12-15 09:44:52.205973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.231 [2024-12-15 09:44:52.206248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.231 [2024-12-15 09:44:52.206275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=0xf 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=decompress 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=software 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 
00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=32 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=32 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=1 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val=Yes 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:03.493 09:44:52 -- accel/accel.sh@21 -- # val= 00:07:03.493 09:44:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # IFS=: 00:07:03.493 09:44:52 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- 
accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@21 -- # val= 00:07:04.909 09:44:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # IFS=: 00:07:04.909 09:44:53 -- accel/accel.sh@20 -- # read -r var val 00:07:04.909 09:44:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.909 09:44:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:04.909 09:44:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.909 00:07:04.909 real 0m4.302s 00:07:04.909 user 0m12.753s 00:07:04.909 sys 0m0.334s 00:07:04.909 09:44:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.909 ************************************ 00:07:04.909 END TEST accel_decomp_mcore 00:07:04.909 09:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:04.909 ************************************ 00:07:04.909 09:44:53 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.909 09:44:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:04.909 09:44:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.909 09:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:04.909 ************************************ 00:07:04.909 START TEST accel_decomp_full_mcore 00:07:04.909 ************************************ 00:07:04.909 09:44:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.909 09:44:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.909 09:44:53 -- accel/accel.sh@17 -- # local accel_module 00:07:04.909 09:44:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.909 09:44:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.909 09:44:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.909 09:44:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.909 09:44:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.909 09:44:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.909 09:44:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.909 09:44:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.909 09:44:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.909 09:44:53 -- accel/accel.sh@42 -- # jq -r . 00:07:05.167 [2024-12-15 09:44:53.934896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.167 [2024-12-15 09:44:53.934977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59672 ] 00:07:05.167 [2024-12-15 09:44:54.075859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.425 [2024-12-15 09:44:54.219002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.425 [2024-12-15 09:44:54.219295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.425 [2024-12-15 09:44:54.219456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.425 [2024-12-15 09:44:54.219474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.323 09:44:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:07.323
00:07:07.323 SPDK Configuration:
00:07:07.323 Core mask: 0xf
00:07:07.323
00:07:07.323 Accel Perf Configuration:
00:07:07.323 Workload Type: decompress
00:07:07.323 Transfer size: 111250 bytes
00:07:07.323 Vector count 1
00:07:07.323 Module: software
00:07:07.323 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:07.323 Queue depth: 32
00:07:07.323 Allocate depth: 32
00:07:07.323 # threads/core: 1
00:07:07.323 Run time: 1 seconds
00:07:07.323 Verify: Yes
00:07:07.323
00:07:07.323 Running for 1 seconds...
00:07:07.323
00:07:07.323 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:07.323 ------------------------------------------------------------------------------------
00:07:07.323 0,0 5504/s 583 MiB/s 0 0
00:07:07.323 3,0 4288/s 454 MiB/s 0 0
00:07:07.323 2,0 4320/s 458 MiB/s 0 0
00:07:07.323 1,0 4320/s 458 MiB/s 0 0
00:07:07.323 ====================================================================================
00:07:07.323 Total 18432/s 1955 MiB/s 0 0'
00:07:07.323 09:44:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.323 09:44:55 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:55 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:07.323 09:44:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.323 09:44:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.323 09:44:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.323 09:44:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.323 09:44:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.323 09:44:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.323 09:44:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.323 09:44:55 -- accel/accel.sh@42 -- # jq -r . 00:07:07.323 [2024-12-15 09:44:55.884760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
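[Editor's note] With -o 0 every transfer spans the full 111250-byte input file, so the Total row above is again reproducible with integer arithmetic (editorial sanity check, not test output):

    echo $(( 18432 * 111250 / 1024 / 1024 ))   # prints 1955, matching "Total 18432/s 1955 MiB/s"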
00:07:07.323 [2024-12-15 09:44:55.884863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59701 ] 00:07:07.323 [2024-12-15 09:44:56.032354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.323 [2024-12-15 09:44:56.179213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.323 [2024-12-15 09:44:56.179402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.323 [2024-12-15 09:44:56.179740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.323 [2024-12-15 09:44:56.179759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=0xf 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=decompress 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=software 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 
00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=32 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=32 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val=1 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.323 09:44:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.323 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.323 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.324 09:44:56 -- accel/accel.sh@21 -- # val=Yes 00:07:07.324 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.324 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.324 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.324 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.324 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.324 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.324 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:07.324 09:44:56 -- accel/accel.sh@21 -- # val= 00:07:07.324 09:44:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.324 09:44:56 -- accel/accel.sh@20 -- # IFS=: 00:07:07.324 09:44:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.224 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.224 09:44:57 -- 
accel/accel.sh@20 -- # read -r var val 00:07:09.224 09:44:57 -- accel/accel.sh@21 -- # val= 00:07:09.224 09:44:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.225 09:44:57 -- accel/accel.sh@20 -- # IFS=: 00:07:09.225 09:44:57 -- accel/accel.sh@20 -- # read -r var val 00:07:09.225 09:44:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.225 09:44:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:09.225 ************************************ 00:07:09.225 END TEST accel_decomp_full_mcore 00:07:09.225 ************************************ 00:07:09.225 09:44:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.225 00:07:09.225 real 0m3.902s 00:07:09.225 user 0m11.888s 00:07:09.225 sys 0m0.272s 00:07:09.225 09:44:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.225 09:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:09.225 09:44:57 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.225 09:44:57 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:09.225 09:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.225 09:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:09.225 ************************************ 00:07:09.225 START TEST accel_decomp_mthread 00:07:09.225 ************************************ 00:07:09.225 09:44:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.225 09:44:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.225 09:44:57 -- accel/accel.sh@17 -- # local accel_module 00:07:09.225 09:44:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.225 09:44:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:09.225 09:44:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.225 09:44:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.225 09:44:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.225 09:44:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.225 09:44:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.225 09:44:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.225 09:44:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.225 09:44:57 -- accel/accel.sh@42 -- # jq -r . 00:07:09.225 [2024-12-15 09:44:57.888522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.225 [2024-12-15 09:44:57.888596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:07:09.225 [2024-12-15 09:44:58.031499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.225 [2024-12-15 09:44:58.171652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.123 09:44:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:11.123
00:07:11.123 SPDK Configuration:
00:07:11.123 Core mask: 0x1
00:07:11.123
00:07:11.123 Accel Perf Configuration:
00:07:11.123 Workload Type: decompress
00:07:11.123 Transfer size: 4096 bytes
00:07:11.123 Vector count 1
00:07:11.123 Module: software
00:07:11.123 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:11.123 Queue depth: 32
00:07:11.123 Allocate depth: 32
00:07:11.123 # threads/core: 2
00:07:11.123 Run time: 1 seconds
00:07:11.123 Verify: Yes
00:07:11.123
00:07:11.123 Running for 1 seconds...
00:07:11.123
00:07:11.123 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:11.123 ------------------------------------------------------------------------------------
00:07:11.123 0,1 40896/s 159 MiB/s 0 0
00:07:11.123 0,0 40800/s 159 MiB/s 0 0
00:07:11.123 ====================================================================================
00:07:11.123 Total 81696/s 319 MiB/s 0 0'
00:07:11.123 09:44:59 -- accel/accel.sh@20 -- # IFS=: 00:07:11.123 09:44:59 -- accel/accel.sh@20 -- # read -r var val 00:07:11.123 09:44:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:11.123 09:44:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:11.123 09:44:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.123 09:44:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.123 09:44:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.123 09:44:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.123 09:44:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.123 09:44:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.123 09:44:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.123 09:44:59 -- accel/accel.sh@42 -- # jq -r . 00:07:11.123 [2024-12-15 09:44:59.797516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
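[Editor's note] The -T 2 flag in the invocation above runs two worker threads on the single selected core ("# threads/core: 2" in the config echo), which is why the table reports rows 0,0 and 0,1 and the Total row is their sum:

    echo $(( 40896 + 40800 ))   # prints 81696, the Total transfers/s across both threads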
00:07:11.123 [2024-12-15 09:44:59.797594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59760 ] 00:07:11.123 [2024-12-15 09:44:59.938505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.123 [2024-12-15 09:45:00.085330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.381 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.381 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.381 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.381 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=0x1 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=decompress 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=software 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=32 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- 
accel/accel.sh@21 -- # val=32 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=2 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val=Yes 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:11.382 09:45:00 -- accel/accel.sh@21 -- # val= 00:07:11.382 09:45:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # IFS=: 00:07:11.382 09:45:00 -- accel/accel.sh@20 -- # read -r var val 00:07:12.756 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@21 -- # val= 00:07:12.757 09:45:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # IFS=: 00:07:12.757 09:45:01 -- accel/accel.sh@20 -- # read -r var val 00:07:12.757 09:45:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.757 09:45:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.757 09:45:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.757 00:07:12.757 real 0m3.813s 00:07:12.757 user 0m3.380s 00:07:12.757 sys 0m0.228s 00:07:12.757 09:45:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.757 ************************************ 00:07:12.757 END TEST accel_decomp_mthread 00:07:12.757 
************************************ 00:07:12.757 09:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.757 09:45:01 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.757 09:45:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:12.757 09:45:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.757 09:45:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.757 ************************************ 00:07:12.757 START TEST accel_deomp_full_mthread 00:07:12.757 ************************************ 00:07:12.757 09:45:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.757 09:45:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.757 09:45:01 -- accel/accel.sh@17 -- # local accel_module 00:07:12.757 09:45:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.757 09:45:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:12.757 09:45:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.757 09:45:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.757 09:45:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.757 09:45:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.757 09:45:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.757 09:45:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.757 09:45:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.757 09:45:01 -- accel/accel.sh@42 -- # jq -r . 00:07:12.757 [2024-12-15 09:45:01.746977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.757 [2024-12-15 09:45:01.747188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59801 ] 00:07:13.015 [2024-12-15 09:45:01.893291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.276 [2024-12-15 09:45:02.069375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.191 09:45:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:15.191 00:07:15.191 SPDK Configuration: 00:07:15.191 Core mask: 0x1 00:07:15.191 00:07:15.191 Accel Perf Configuration: 00:07:15.191 Workload Type: decompress 00:07:15.191 Transfer size: 111250 bytes 00:07:15.191 Vector count 1 00:07:15.191 Module: software 00:07:15.191 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.191 Queue depth: 32 00:07:15.191 Allocate depth: 32 00:07:15.191 # threads/core: 2 00:07:15.191 Run time: 1 seconds 00:07:15.191 Verify: Yes 00:07:15.191 00:07:15.191 Running for 1 seconds... 
00:07:15.191 00:07:15.191 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.191 ------------------------------------------------------------------------------------ 00:07:15.191 0,1 2208/s 91 MiB/s 0 0 00:07:15.191 0,0 2176/s 89 MiB/s 0 0 00:07:15.191 ==================================================================================== 00:07:15.191 Total 4384/s 465 MiB/s 0 0' 00:07:15.191 09:45:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.191 09:45:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.191 09:45:03 -- accel/accel.sh@20 -- # IFS=: 00:07:15.191 09:45:03 -- accel/accel.sh@20 -- # read -r var val 00:07:15.191 09:45:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.191 09:45:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.191 09:45:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.191 09:45:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.191 09:45:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.191 09:45:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.191 09:45:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.191 09:45:03 -- accel/accel.sh@42 -- # jq -r . 00:07:15.191 [2024-12-15 09:45:03.906759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.191 [2024-12-15 09:45:03.906981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59827 ] 00:07:15.191 [2024-12-15 09:45:04.053933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.452 [2024-12-15 09:45:04.241799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.452 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.452 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.452 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.452 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.452 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.452 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.452 09:45:04 -- accel/accel.sh@21 -- # val=0x1 00:07:15.452 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.452 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=decompress 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=software 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=32 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=32 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=2 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val=Yes 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:15.453 09:45:04 -- accel/accel.sh@21 -- # val= 00:07:15.453 09:45:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # IFS=: 00:07:15.453 09:45:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # 
read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@21 -- # val= 00:07:17.365 09:45:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # IFS=: 00:07:17.365 09:45:06 -- accel/accel.sh@20 -- # read -r var val 00:07:17.365 09:45:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.365 09:45:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.365 09:45:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.365 00:07:17.365 real 0m4.320s 00:07:17.365 user 0m3.867s 00:07:17.365 sys 0m0.242s 00:07:17.365 09:45:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.365 ************************************ 00:07:17.365 END TEST accel_deomp_full_mthread 00:07:17.365 ************************************ 00:07:17.365 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.365 09:45:06 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:17.365 09:45:06 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.365 09:45:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:17.365 09:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.365 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:07:17.365 09:45:06 -- accel/accel.sh@129 -- # build_accel_config 00:07:17.365 09:45:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.365 09:45:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.365 09:45:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.365 09:45:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.365 09:45:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.365 09:45:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.365 09:45:06 -- accel/accel.sh@42 -- # jq -r . 00:07:17.365 ************************************ 00:07:17.365 START TEST accel_dif_functional_tests 00:07:17.365 ************************************ 00:07:17.365 09:45:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:17.365 [2024-12-15 09:45:06.148791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:17.365 [2024-12-15 09:45:06.148905] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59879 ] 00:07:17.365 [2024-12-15 09:45:06.295448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.626 [2024-12-15 09:45:06.487477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.626 [2024-12-15 09:45:06.488077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.626 [2024-12-15 09:45:06.488249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.887 00:07:17.887 00:07:17.887 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.887 http://cunit.sourceforge.net/ 00:07:17.887 00:07:17.887 00:07:17.887 Suite: accel_dif 00:07:17.887 Test: verify: DIF generated, GUARD check ...passed 00:07:17.887 Test: verify: DIF generated, APPTAG check ...passed 00:07:17.887 Test: verify: DIF generated, REFTAG check ...passed 00:07:17.887 Test: verify: DIF not generated, GUARD check ...passed 00:07:17.887 Test: verify: DIF not generated, APPTAG check ...passed 00:07:17.887 Test: verify: DIF not generated, REFTAG check ...passed 00:07:17.887 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:17.887 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:17.887 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:17.887 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:17.887 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:17.887 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:17.887 Test: generate copy: DIF generated, GUARD check ...passed 00:07:17.887 Test: generate copy: DIF generated, APTTAG check ...[2024-12-15 09:45:06.711653] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.887 [2024-12-15 09:45:06.711863] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:17.887 [2024-12-15 09:45:06.711939] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.887 [2024-12-15 09:45:06.711968] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:17.887 [2024-12-15 09:45:06.711998] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.887 [2024-12-15 09:45:06.712021] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:17.887 [2024-12-15 09:45:06.712087] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:17.887 [2024-12-15 09:45:06.712266] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:17.887 passed 00:07:17.887 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:17.887 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:17.887 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:17.887 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:17.887 Test: generate copy: iovecs-len validate ...passed 00:07:17.887 Test: generate copy: buffer alignment validate ...passed 00:07:17.887 00:07:17.887 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.887 suites 1 1 n/a 0 0 00:07:17.887 tests 20 20 20 0 0 00:07:17.887 
asserts 204 204 204 0 n/a 00:07:17.887 00:07:17.887 Elapsed time = 0.003 seconds 00:07:17.887 [2024-12-15 09:45:06.712569] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:18.852 00:07:18.852 real 0m1.402s 00:07:18.852 user 0m2.577s 00:07:18.852 sys 0m0.175s 00:07:18.852 ************************************ 00:07:18.852 END TEST accel_dif_functional_tests 00:07:18.852 ************************************ 00:07:18.852 09:45:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.852 09:45:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.852 00:07:18.852 real 1m28.124s 00:07:18.852 user 1m36.225s 00:07:18.852 sys 0m6.517s 00:07:18.852 09:45:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.852 ************************************ 00:07:18.852 END TEST accel 00:07:18.852 ************************************ 00:07:18.852 09:45:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.852 09:45:07 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:18.852 09:45:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.852 09:45:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.852 09:45:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.852 ************************************ 00:07:18.852 START TEST accel_rpc 00:07:18.852 ************************************ 00:07:18.852 09:45:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:18.852 * Looking for test storage... 00:07:18.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:18.852 09:45:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:18.852 09:45:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:18.852 09:45:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:18.852 09:45:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:18.852 09:45:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:18.852 09:45:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:18.852 09:45:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:18.852 09:45:07 -- scripts/common.sh@335 -- # IFS=.-: 00:07:18.852 09:45:07 -- scripts/common.sh@335 -- # read -ra ver1 00:07:18.852 09:45:07 -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.852 09:45:07 -- scripts/common.sh@336 -- # read -ra ver2 00:07:18.852 09:45:07 -- scripts/common.sh@337 -- # local 'op=<' 00:07:18.852 09:45:07 -- scripts/common.sh@339 -- # ver1_l=2 00:07:18.852 09:45:07 -- scripts/common.sh@340 -- # ver2_l=1 00:07:18.852 09:45:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:18.852 09:45:07 -- scripts/common.sh@343 -- # case "$op" in 00:07:18.852 09:45:07 -- scripts/common.sh@344 -- # : 1 00:07:18.852 09:45:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:18.852 09:45:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.852 09:45:07 -- scripts/common.sh@364 -- # decimal 1 00:07:18.852 09:45:07 -- scripts/common.sh@352 -- # local d=1 00:07:18.852 09:45:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.852 09:45:07 -- scripts/common.sh@354 -- # echo 1 00:07:18.852 09:45:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:18.852 09:45:07 -- scripts/common.sh@365 -- # decimal 2 00:07:18.852 09:45:07 -- scripts/common.sh@352 -- # local d=2 00:07:18.852 09:45:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.852 09:45:07 -- scripts/common.sh@354 -- # echo 2 00:07:18.852 09:45:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:18.852 09:45:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:18.852 09:45:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:18.852 09:45:07 -- scripts/common.sh@367 -- # return 0 00:07:18.852 09:45:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.852 09:45:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:18.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.852 --rc genhtml_branch_coverage=1 00:07:18.852 --rc genhtml_function_coverage=1 00:07:18.852 --rc genhtml_legend=1 00:07:18.852 --rc geninfo_all_blocks=1 00:07:18.852 --rc geninfo_unexecuted_blocks=1 00:07:18.852 00:07:18.852 ' 00:07:18.852 09:45:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:18.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.852 --rc genhtml_branch_coverage=1 00:07:18.852 --rc genhtml_function_coverage=1 00:07:18.852 --rc genhtml_legend=1 00:07:18.852 --rc geninfo_all_blocks=1 00:07:18.852 --rc geninfo_unexecuted_blocks=1 00:07:18.852 00:07:18.852 ' 00:07:18.852 09:45:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:18.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.852 --rc genhtml_branch_coverage=1 00:07:18.852 --rc genhtml_function_coverage=1 00:07:18.852 --rc genhtml_legend=1 00:07:18.852 --rc geninfo_all_blocks=1 00:07:18.852 --rc geninfo_unexecuted_blocks=1 00:07:18.852 00:07:18.852 ' 00:07:18.852 09:45:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:18.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.852 --rc genhtml_branch_coverage=1 00:07:18.852 --rc genhtml_function_coverage=1 00:07:18.852 --rc genhtml_legend=1 00:07:18.852 --rc geninfo_all_blocks=1 00:07:18.852 --rc geninfo_unexecuted_blocks=1 00:07:18.852 00:07:18.852 ' 00:07:18.852 09:45:07 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:18.852 09:45:07 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59958 00:07:18.852 09:45:07 -- accel/accel_rpc.sh@15 -- # waitforlisten 59958 00:07:18.852 09:45:07 -- common/autotest_common.sh@829 -- # '[' -z 59958 ']' 00:07:18.852 09:45:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.852 09:45:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.852 09:45:07 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:18.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.852 09:45:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
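The accel_rpc run below starts spdk_tgt with --wait-for-rpc, assigns the copy opcode to a module before framework initialization, and then verifies the assignment. A minimal sketch of the equivalent manual sequence, assuming a target run from the repo root and listening on the default /var/tmp/spdk.sock (the flags mirror the rpc_cmd calls visible in the log):
  ./build/bin/spdk_tgt --wait-for-rpc &        # target pauses before subsystem init
  ./scripts/rpc.py accel_assign_opc -o copy -m software   # must happen pre-init
  ./scripts/rpc.py framework_start_init                   # complete startup
  ./scripts/rpc.py accel_get_opc_assignments              # copy should report "software"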
00:07:18.852 09:45:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.852 09:45:07 -- common/autotest_common.sh@10 -- # set +x 00:07:18.852 [2024-12-15 09:45:07.823082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.852 [2024-12-15 09:45:07.823199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:07:19.136 [2024-12-15 09:45:07.969249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.136 [2024-12-15 09:45:08.149169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.136 [2024-12-15 09:45:08.149390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.707 09:45:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.707 09:45:08 -- common/autotest_common.sh@862 -- # return 0 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:19.707 09:45:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.707 09:45:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.707 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.707 ************************************ 00:07:19.707 START TEST accel_assign_opcode 00:07:19.707 ************************************ 00:07:19.707 09:45:08 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:19.707 09:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.707 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.707 [2024-12-15 09:45:08.658022] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:19.707 09:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:19.707 09:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.707 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:19.707 [2024-12-15 09:45:08.665988] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:19.707 09:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.707 09:45:08 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:19.707 09:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.707 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:20.279 09:45:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.279 09:45:09 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:20.279 09:45:09 -- accel/accel_rpc.sh@42 -- # grep software 00:07:20.279 09:45:09 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:20.279 09:45:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.279 09:45:09 -- common/autotest_common.sh@10 -- # set +x 00:07:20.279 09:45:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.279 software 00:07:20.279 00:07:20.279 
real 0m0.583s 00:07:20.279 ************************************ 00:07:20.279 END TEST accel_assign_opcode 00:07:20.279 ************************************ 00:07:20.279 user 0m0.030s 00:07:20.279 sys 0m0.008s 00:07:20.279 09:45:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.279 09:45:09 -- common/autotest_common.sh@10 -- # set +x 00:07:20.279 09:45:09 -- accel/accel_rpc.sh@55 -- # killprocess 59958 00:07:20.279 09:45:09 -- common/autotest_common.sh@936 -- # '[' -z 59958 ']' 00:07:20.279 09:45:09 -- common/autotest_common.sh@940 -- # kill -0 59958 00:07:20.279 09:45:09 -- common/autotest_common.sh@941 -- # uname 00:07:20.279 09:45:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:20.279 09:45:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59958 00:07:20.540 killing process with pid 59958 00:07:20.540 09:45:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:20.540 09:45:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:20.540 09:45:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59958' 00:07:20.540 09:45:09 -- common/autotest_common.sh@955 -- # kill 59958 00:07:20.540 09:45:09 -- common/autotest_common.sh@960 -- # wait 59958 00:07:21.922 00:07:21.922 real 0m3.307s 00:07:21.922 user 0m3.256s 00:07:21.922 sys 0m0.395s 00:07:21.922 09:45:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.922 ************************************ 00:07:21.922 END TEST accel_rpc 00:07:21.922 ************************************ 00:07:21.922 09:45:10 -- common/autotest_common.sh@10 -- # set +x 00:07:22.180 09:45:10 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.180 09:45:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.180 09:45:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.180 09:45:10 -- common/autotest_common.sh@10 -- # set +x 00:07:22.180 ************************************ 00:07:22.180 START TEST app_cmdline 00:07:22.180 ************************************ 00:07:22.180 09:45:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.180 * Looking for test storage... 
00:07:22.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.180 09:45:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.180 09:45:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.180 09:45:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.180 09:45:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.180 09:45:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.180 09:45:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.180 09:45:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.180 09:45:11 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.180 09:45:11 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.180 09:45:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.180 09:45:11 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.180 09:45:11 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.180 09:45:11 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.180 09:45:11 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.180 09:45:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.180 09:45:11 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.180 09:45:11 -- scripts/common.sh@344 -- # : 1 00:07:22.180 09:45:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.180 09:45:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.180 09:45:11 -- scripts/common.sh@364 -- # decimal 1 00:07:22.180 09:45:11 -- scripts/common.sh@352 -- # local d=1 00:07:22.180 09:45:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.181 09:45:11 -- scripts/common.sh@354 -- # echo 1 00:07:22.181 09:45:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.181 09:45:11 -- scripts/common.sh@365 -- # decimal 2 00:07:22.181 09:45:11 -- scripts/common.sh@352 -- # local d=2 00:07:22.181 09:45:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.181 09:45:11 -- scripts/common.sh@354 -- # echo 2 00:07:22.181 09:45:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.181 09:45:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.181 09:45:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.181 09:45:11 -- scripts/common.sh@367 -- # return 0 00:07:22.181 09:45:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.181 09:45:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.181 --rc genhtml_branch_coverage=1 00:07:22.181 --rc genhtml_function_coverage=1 00:07:22.181 --rc genhtml_legend=1 00:07:22.181 --rc geninfo_all_blocks=1 00:07:22.181 --rc geninfo_unexecuted_blocks=1 00:07:22.181 00:07:22.181 ' 00:07:22.181 09:45:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.181 --rc genhtml_branch_coverage=1 00:07:22.181 --rc genhtml_function_coverage=1 00:07:22.181 --rc genhtml_legend=1 00:07:22.181 --rc geninfo_all_blocks=1 00:07:22.181 --rc geninfo_unexecuted_blocks=1 00:07:22.181 00:07:22.181 ' 00:07:22.181 09:45:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.181 --rc genhtml_branch_coverage=1 00:07:22.181 --rc genhtml_function_coverage=1 00:07:22.181 --rc genhtml_legend=1 00:07:22.181 --rc geninfo_all_blocks=1 00:07:22.181 --rc geninfo_unexecuted_blocks=1 00:07:22.181 00:07:22.181 ' 00:07:22.181 09:45:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.181 --rc genhtml_branch_coverage=1 00:07:22.181 --rc genhtml_function_coverage=1 00:07:22.181 --rc genhtml_legend=1 00:07:22.181 --rc geninfo_all_blocks=1 00:07:22.181 --rc geninfo_unexecuted_blocks=1 00:07:22.181 00:07:22.181 ' 00:07:22.181 09:45:11 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.181 09:45:11 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60076 00:07:22.181 09:45:11 -- app/cmdline.sh@18 -- # waitforlisten 60076 00:07:22.181 09:45:11 -- common/autotest_common.sh@829 -- # '[' -z 60076 ']' 00:07:22.181 09:45:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.181 09:45:11 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.181 09:45:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.181 09:45:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.181 09:45:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.181 09:45:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.181 [2024-12-15 09:45:11.181651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.181 [2024-12-15 09:45:11.181764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60076 ] 00:07:22.481 [2024-12-15 09:45:11.331028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.742 [2024-12-15 09:45:11.506452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.742 [2024-12-15 09:45:11.506654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.683 09:45:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.683 09:45:12 -- common/autotest_common.sh@862 -- # return 0 00:07:23.683 09:45:12 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:23.944 { 00:07:23.944 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:23.944 "fields": { 00:07:23.944 "major": 24, 00:07:23.944 "minor": 1, 00:07:23.944 "patch": 1, 00:07:23.944 "suffix": "-pre", 00:07:23.944 "commit": "c13c99a5e" 00:07:23.944 } 00:07:23.944 } 00:07:23.944 09:45:12 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.944 09:45:12 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.944 09:45:12 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.944 09:45:12 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.944 09:45:12 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.944 09:45:12 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.944 09:45:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.944 09:45:12 -- app/cmdline.sh@26 -- # sort 00:07:23.944 09:45:12 -- common/autotest_common.sh@10 -- # set +x 00:07:23.944 09:45:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.944 09:45:12 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.944 09:45:12 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.944 09:45:12 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.944 09:45:12 -- common/autotest_common.sh@650 -- # local es=0 00:07:23.944 09:45:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.944 09:45:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.944 09:45:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.944 09:45:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.944 09:45:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.944 09:45:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.944 09:45:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.944 09:45:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.944 09:45:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:23.944 09:45:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.206 request: 00:07:24.206 { 00:07:24.206 "method": "env_dpdk_get_mem_stats", 00:07:24.206 "req_id": 1 00:07:24.206 } 00:07:24.206 Got JSON-RPC error response 00:07:24.206 response: 00:07:24.206 { 00:07:24.206 "code": -32601, 00:07:24.206 "message": "Method not found" 00:07:24.206 } 00:07:24.206 09:45:13 -- common/autotest_common.sh@653 -- # es=1 00:07:24.206 09:45:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.206 09:45:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.206 09:45:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.206 09:45:13 -- app/cmdline.sh@1 -- # killprocess 60076 00:07:24.206 09:45:13 -- common/autotest_common.sh@936 -- # '[' -z 60076 ']' 00:07:24.206 09:45:13 -- common/autotest_common.sh@940 -- # kill -0 60076 00:07:24.206 09:45:13 -- common/autotest_common.sh@941 -- # uname 00:07:24.206 09:45:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.206 09:45:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60076 00:07:24.206 09:45:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:24.206 09:45:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:24.206 killing process with pid 60076 00:07:24.206 09:45:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60076' 00:07:24.206 09:45:13 -- common/autotest_common.sh@955 -- # kill 60076 00:07:24.206 09:45:13 -- common/autotest_common.sh@960 -- # wait 60076 00:07:26.117 00:07:26.117 real 0m3.661s 00:07:26.117 user 0m4.080s 00:07:26.117 sys 0m0.442s 00:07:26.117 09:45:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.117 ************************************ 00:07:26.117 END TEST app_cmdline 00:07:26.117 ************************************ 00:07:26.117 09:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:26.117 09:45:14 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:26.117 09:45:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:26.117 09:45:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.118 09:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:26.118 
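The cmdline test above exercises a target started with --rpcs-allowed spdk_get_version,rpc_get_methods: allow-listed methods succeed, while anything else (here env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601, "Method not found". A minimal sketch of the same checks by hand, assuming a target listening on the default /var/tmp/spdk.sock:
  ./scripts/rpc.py spdk_get_version        # allowed: returns the version object shown above
  ./scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats  # blocked: fails with code -32601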
************************************ 00:07:26.118 START TEST version 00:07:26.118 ************************************ 00:07:26.118 09:45:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:26.118 * Looking for test storage... 00:07:26.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.118 09:45:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.118 09:45:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.118 09:45:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.118 09:45:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.118 09:45:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.118 09:45:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.118 09:45:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.118 09:45:14 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.118 09:45:14 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.118 09:45:14 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.118 09:45:14 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.118 09:45:14 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.118 09:45:14 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.118 09:45:14 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.118 09:45:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.118 09:45:14 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.118 09:45:14 -- scripts/common.sh@344 -- # : 1 00:07:26.118 09:45:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.118 09:45:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.118 09:45:14 -- scripts/common.sh@364 -- # decimal 1 00:07:26.118 09:45:14 -- scripts/common.sh@352 -- # local d=1 00:07:26.118 09:45:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.118 09:45:14 -- scripts/common.sh@354 -- # echo 1 00:07:26.118 09:45:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.118 09:45:14 -- scripts/common.sh@365 -- # decimal 2 00:07:26.118 09:45:14 -- scripts/common.sh@352 -- # local d=2 00:07:26.118 09:45:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.118 09:45:14 -- scripts/common.sh@354 -- # echo 2 00:07:26.118 09:45:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.118 09:45:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.118 09:45:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.118 09:45:14 -- scripts/common.sh@367 -- # return 0 00:07:26.118 09:45:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.118 09:45:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.118 --rc genhtml_branch_coverage=1 00:07:26.118 --rc genhtml_function_coverage=1 00:07:26.118 --rc genhtml_legend=1 00:07:26.118 --rc geninfo_all_blocks=1 00:07:26.118 --rc geninfo_unexecuted_blocks=1 00:07:26.118 00:07:26.118 ' 00:07:26.118 09:45:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.118 --rc genhtml_branch_coverage=1 00:07:26.118 --rc genhtml_function_coverage=1 00:07:26.118 --rc genhtml_legend=1 00:07:26.118 --rc geninfo_all_blocks=1 00:07:26.118 --rc geninfo_unexecuted_blocks=1 00:07:26.118 00:07:26.118 ' 00:07:26.118 09:45:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.118 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:26.118 --rc genhtml_branch_coverage=1 00:07:26.118 --rc genhtml_function_coverage=1 00:07:26.118 --rc genhtml_legend=1 00:07:26.118 --rc geninfo_all_blocks=1 00:07:26.118 --rc geninfo_unexecuted_blocks=1 00:07:26.118 00:07:26.118 ' 00:07:26.118 09:45:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.118 --rc genhtml_branch_coverage=1 00:07:26.118 --rc genhtml_function_coverage=1 00:07:26.118 --rc genhtml_legend=1 00:07:26.118 --rc geninfo_all_blocks=1 00:07:26.118 --rc geninfo_unexecuted_blocks=1 00:07:26.118 00:07:26.118 ' 00:07:26.118 09:45:14 -- app/version.sh@17 -- # get_header_version major 00:07:26.118 09:45:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.118 09:45:14 -- app/version.sh@14 -- # cut -f2 00:07:26.118 09:45:14 -- app/version.sh@14 -- # tr -d '"' 00:07:26.118 09:45:14 -- app/version.sh@17 -- # major=24 00:07:26.118 09:45:14 -- app/version.sh@18 -- # get_header_version minor 00:07:26.118 09:45:14 -- app/version.sh@14 -- # cut -f2 00:07:26.118 09:45:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.118 09:45:14 -- app/version.sh@14 -- # tr -d '"' 00:07:26.118 09:45:14 -- app/version.sh@18 -- # minor=1 00:07:26.118 09:45:14 -- app/version.sh@19 -- # get_header_version patch 00:07:26.118 09:45:14 -- app/version.sh@14 -- # cut -f2 00:07:26.118 09:45:14 -- app/version.sh@14 -- # tr -d '"' 00:07:26.118 09:45:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.118 09:45:14 -- app/version.sh@19 -- # patch=1 00:07:26.118 09:45:14 -- app/version.sh@20 -- # get_header_version suffix 00:07:26.118 09:45:14 -- app/version.sh@14 -- # cut -f2 00:07:26.118 09:45:14 -- app/version.sh@14 -- # tr -d '"' 00:07:26.118 09:45:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:26.118 09:45:14 -- app/version.sh@20 -- # suffix=-pre 00:07:26.118 09:45:14 -- app/version.sh@22 -- # version=24.1 00:07:26.118 09:45:14 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:26.118 09:45:14 -- app/version.sh@25 -- # version=24.1.1 00:07:26.118 09:45:14 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:26.118 09:45:14 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:26.118 09:45:14 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:26.118 09:45:14 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:26.118 09:45:14 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:26.118 ************************************ 00:07:26.118 END TEST version 00:07:26.118 ************************************ 00:07:26.118 00:07:26.118 real 0m0.189s 00:07:26.118 user 0m0.119s 00:07:26.118 sys 0m0.095s 00:07:26.118 09:45:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.118 09:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:26.118 09:45:14 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:26.118 09:45:14 -- spdk/autotest.sh@191 -- # uname -s 00:07:26.118 09:45:14 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
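The version test above derives the version string by scraping include/spdk/version.h with grep/cut/tr and compares it against the Python package (python3 -c 'import spdk; print(spdk.__version__)'). A condensed sketch of that pipeline, run from the repo root; get_field here is a hypothetical helper standing in for the test's get_header_version, and the cut -f2 assumes the tab-separated #define lines in version.h, exactly as the test does:
  get_field() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'; }
  echo "$(get_field MAJOR).$(get_field MINOR).$(get_field PATCH)$(get_field SUFFIX)"   # e.g. 24.1.1-pre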
00:07:26.118 09:45:14 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:26.118 09:45:14 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:26.118 09:45:14 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:26.118 09:45:14 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:26.118 09:45:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.118 09:45:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.118 09:45:14 -- common/autotest_common.sh@10 -- # set +x 00:07:26.118 ************************************ 00:07:26.118 START TEST blockdev_nvme 00:07:26.118 ************************************ 00:07:26.118 09:45:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:26.118 * Looking for test storage... 00:07:26.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:26.118 09:45:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.118 09:45:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.118 09:45:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.118 09:45:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.118 09:45:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.118 09:45:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.118 09:45:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.118 09:45:15 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.118 09:45:15 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.118 09:45:15 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.118 09:45:15 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.118 09:45:15 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.118 09:45:15 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.118 09:45:15 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.119 09:45:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.119 09:45:15 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.119 09:45:15 -- scripts/common.sh@344 -- # : 1 00:07:26.119 09:45:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.119 09:45:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.119 09:45:15 -- scripts/common.sh@364 -- # decimal 1 00:07:26.119 09:45:15 -- scripts/common.sh@352 -- # local d=1 00:07:26.119 09:45:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.119 09:45:15 -- scripts/common.sh@354 -- # echo 1 00:07:26.119 09:45:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.119 09:45:15 -- scripts/common.sh@365 -- # decimal 2 00:07:26.119 09:45:15 -- scripts/common.sh@352 -- # local d=2 00:07:26.119 09:45:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.119 09:45:15 -- scripts/common.sh@354 -- # echo 2 00:07:26.119 09:45:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.119 09:45:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.119 09:45:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.119 09:45:15 -- scripts/common.sh@367 -- # return 0 00:07:26.119 09:45:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.119 09:45:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.119 --rc genhtml_branch_coverage=1 00:07:26.119 --rc genhtml_function_coverage=1 00:07:26.119 --rc genhtml_legend=1 00:07:26.119 --rc geninfo_all_blocks=1 00:07:26.119 --rc geninfo_unexecuted_blocks=1 00:07:26.119 00:07:26.119 ' 00:07:26.119 09:45:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.119 --rc genhtml_branch_coverage=1 00:07:26.119 --rc genhtml_function_coverage=1 00:07:26.119 --rc genhtml_legend=1 00:07:26.119 --rc geninfo_all_blocks=1 00:07:26.119 --rc geninfo_unexecuted_blocks=1 00:07:26.119 00:07:26.119 ' 00:07:26.119 09:45:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.119 --rc genhtml_branch_coverage=1 00:07:26.119 --rc genhtml_function_coverage=1 00:07:26.119 --rc genhtml_legend=1 00:07:26.119 --rc geninfo_all_blocks=1 00:07:26.119 --rc geninfo_unexecuted_blocks=1 00:07:26.119 00:07:26.119 ' 00:07:26.119 09:45:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.119 --rc genhtml_branch_coverage=1 00:07:26.119 --rc genhtml_function_coverage=1 00:07:26.119 --rc genhtml_legend=1 00:07:26.119 --rc geninfo_all_blocks=1 00:07:26.119 --rc geninfo_unexecuted_blocks=1 00:07:26.119 00:07:26.119 ' 00:07:26.119 09:45:15 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:26.119 09:45:15 -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.119 09:45:15 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:26.119 09:45:15 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.119 09:45:15 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:26.119 09:45:15 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:26.119 09:45:15 -- bdev/blockdev.sh@18 -- # : 00:07:26.119 09:45:15 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:26.119 09:45:15 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:26.119 09:45:15 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:26.119 09:45:15 -- bdev/blockdev.sh@672 -- # uname -s 00:07:26.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
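Before the blockdev_nvme suite runs, setup_nvme_conf (below) builds a bdev JSON config covering every local NVMe controller via scripts/gen_nvme.sh, loads it over RPC, and waits for bdev examination; see the load_subsystem_config call that follows. A minimal sketch of that pattern, mirroring the calls in the log and using this workspace's paths:
  json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)          # emits the bdev_nvme_attach_controller config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$json"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine # let examine callbacks finish
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs        # dump the resulting Nvme*n* bdevs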
00:07:26.119 09:45:15 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:26.119 09:45:15 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:26.119 09:45:15 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:26.119 09:45:15 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:26.119 09:45:15 -- bdev/blockdev.sh@682 -- # dek= 00:07:26.119 09:45:15 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:26.119 09:45:15 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:26.119 09:45:15 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:26.119 09:45:15 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:26.119 09:45:15 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:26.119 09:45:15 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:26.119 09:45:15 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60253 00:07:26.119 09:45:15 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.119 09:45:15 -- bdev/blockdev.sh@47 -- # waitforlisten 60253 00:07:26.119 09:45:15 -- common/autotest_common.sh@829 -- # '[' -z 60253 ']' 00:07:26.119 09:45:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.119 09:45:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.119 09:45:15 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.119 09:45:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.119 09:45:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.119 09:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:26.379 [2024-12-15 09:45:15.143216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.379 [2024-12-15 09:45:15.143439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60253 ] 00:07:26.379 [2024-12-15 09:45:15.293132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.639 [2024-12-15 09:45:15.477003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.639 [2024-12-15 09:45:15.477346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.024 09:45:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.024 09:45:16 -- common/autotest_common.sh@862 -- # return 0 00:07:28.024 09:45:16 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:28.024 09:45:16 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:28.024 09:45:16 -- bdev/blockdev.sh@79 -- # local json 00:07:28.024 09:45:16 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:28.024 09:45:16 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:28.024 09:45:16 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:28.024 09:45:16 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.024 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 09:45:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.024 09:45:16 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:28.024 09:45:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.024 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 09:45:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.024 09:45:16 -- bdev/blockdev.sh@738 -- # cat 00:07:28.024 09:45:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:28.024 09:45:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.024 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 09:45:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.024 09:45:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:28.024 09:45:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.024 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 09:45:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.024 09:45:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:28.024 09:45:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.024 09:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 09:45:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.024 09:45:17 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:28.024 09:45:17 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:28.024 09:45:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.024 09:45:17 -- common/autotest_common.sh@10 -- # set +x 00:07:28.024 09:45:17 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:28.285 09:45:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.286 09:45:17 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:28.286 09:45:17 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "33614d56-78c8-4dd1-bee7-e3fdee56d545"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "33614d56-78c8-4dd1-bee7-e3fdee56d545",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "62c5e88f-ceb8-457b-b429-26422a42afba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' 
"num_blocks": 1310720,' ' "uuid": "62c5e88f-ceb8-457b-b429-26422a42afba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "901dc127-9978-4473-a967-aa14d8a85d32"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "901dc127-9978-4473-a967-aa14d8a85d32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9cad1664-6cab-4b38-95a2-88dfdc032da0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9cad1664-6cab-4b38-95a2-88dfdc032da0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' 
"multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e492ba47-d7be-4d4e-a88e-65288d2a70f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e492ba47-d7be-4d4e-a88e-65288d2a70f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bf91bd73-2d83-40e4-b9b0-2542c7001707"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bf91bd73-2d83-40e4-b9b0-2542c7001707",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:28.286 09:45:17 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:28.286 09:45:17 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:28.286 09:45:17 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:28.286 09:45:17 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:28.286 09:45:17 -- bdev/blockdev.sh@752 -- # killprocess 60253 00:07:28.286 09:45:17 -- common/autotest_common.sh@936 -- # '[' -z 60253 ']' 00:07:28.286 09:45:17 -- common/autotest_common.sh@940 -- # kill -0 60253 00:07:28.286 09:45:17 -- common/autotest_common.sh@941 -- # uname 00:07:28.286 09:45:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.286 09:45:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 60253 00:07:28.286 killing process with pid 60253 00:07:28.286 09:45:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:28.286 09:45:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:28.286 09:45:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60253' 00:07:28.286 09:45:17 -- common/autotest_common.sh@955 -- # kill 60253 00:07:28.286 09:45:17 -- common/autotest_common.sh@960 -- # wait 60253 00:07:29.731 09:45:18 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:29.731 09:45:18 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:29.731 09:45:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:29.731 09:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.731 09:45:18 -- common/autotest_common.sh@10 -- # set +x 00:07:29.731 ************************************ 00:07:29.731 START TEST bdev_hello_world 00:07:29.731 ************************************ 00:07:29.731 09:45:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:29.990 [2024-12-15 09:45:18.762842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.990 [2024-12-15 09:45:18.763155] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:07:29.990 [2024-12-15 09:45:18.913227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.250 [2024-12-15 09:45:19.167033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.821 [2024-12-15 09:45:19.761378] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:30.821 [2024-12-15 09:45:19.761439] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:30.822 [2024-12-15 09:45:19.761463] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:30.822 [2024-12-15 09:45:19.764122] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:30.822 [2024-12-15 09:45:19.764847] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:30.822 [2024-12-15 09:45:19.765005] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:30.822 [2024-12-15 09:45:19.765297] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
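The hello_bdev run above follows the example's fixed sequence: open the target bdev from the JSON config, write a buffer, then read it back and compare ("Read string from bdev : Hello World!"). A minimal sketch of reproducing just this step by hand, assuming the same spdk_repo checkout used in this job:

    # run the example against the generated NVMe config; -b picks the bdev to exercise
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

The --json file is the config assembled by gen_nvme.sh earlier in the run; any bdev name from the bdev_get_bdevs dump above (Nvme0n1 through Nvme3n1) should work as the -b argument.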
00:07:30.822 00:07:30.822 [2024-12-15 09:45:19.765325] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:31.767 00:07:31.767 real 0m1.944s 00:07:31.767 user 0m1.596s 00:07:31.767 sys 0m0.232s 00:07:31.767 09:45:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.767 ************************************ 00:07:31.767 END TEST bdev_hello_world 00:07:31.767 ************************************ 00:07:31.767 09:45:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.767 09:45:20 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:31.767 09:45:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.767 09:45:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.767 09:45:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.767 ************************************ 00:07:31.767 START TEST bdev_bounds 00:07:31.767 ************************************ 00:07:31.767 09:45:20 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:31.767 09:45:20 -- bdev/blockdev.sh@288 -- # bdevio_pid=60391 00:07:31.767 09:45:20 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:31.767 09:45:20 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:31.767 Process bdevio pid: 60391 00:07:31.767 09:45:20 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60391' 00:07:31.767 09:45:20 -- bdev/blockdev.sh@291 -- # waitforlisten 60391 00:07:31.767 09:45:20 -- common/autotest_common.sh@829 -- # '[' -z 60391 ']' 00:07:31.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.767 09:45:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.767 09:45:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.767 09:45:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.767 09:45:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.767 09:45:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.767 [2024-12-15 09:45:20.778704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
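The bdev_bounds test starting here is split across two processes: a bdevio server that loads the same JSON config, and a Python driver that requests the test matrix over RPC. Condensed from the command lines in this trace (-w makes the server wait to be started via RPC; the -s 0 and socket arguments are taken verbatim from the harness invocation):

    # server: stays up on the default RPC socket until told to run
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # client: kicks off the full blockdev test suite against each unclaimed bdev
    test/bdev/bdevio/tests.py perform_tests

The per-namespace output that follows (one "Suite: bdevio tests on: ..." block each for Nvme3n1 down to Nvme0n1) is the CUnit report printed by the server.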
00:07:31.767 [2024-12-15 09:45:20.778848] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60391 ] 00:07:32.029 [2024-12-15 09:45:20.934414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.290 [2024-12-15 09:45:21.179287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.290 [2024-12-15 09:45:21.179914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.290 [2024-12-15 09:45:21.180014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.674 09:45:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.674 09:45:22 -- common/autotest_common.sh@862 -- # return 0 00:07:33.674 09:45:22 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:33.674 I/O targets: 00:07:33.674 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:33.674 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:33.674 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:33.674 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:33.674 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:33.674 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:33.674 00:07:33.674 00:07:33.674 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.674 http://cunit.sourceforge.net/ 00:07:33.674 00:07:33.674 00:07:33.674 Suite: bdevio tests on: Nvme3n1 00:07:33.674 Test: blockdev write read block ...passed 00:07:33.674 Test: blockdev write zeroes read block ...passed 00:07:33.674 Test: blockdev write zeroes read no split ...passed 00:07:33.674 Test: blockdev write zeroes read split ...passed 00:07:33.674 Test: blockdev write zeroes read split partial ...passed 00:07:33.674 Test: blockdev reset ...[2024-12-15 09:45:22.498001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:33.674 [2024-12-15 09:45:22.502732] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:33.674 passed 00:07:33.674 Test: blockdev write read 8 blocks ...passed 00:07:33.674 Test: blockdev write read size > 128k ...passed 00:07:33.674 Test: blockdev write read invalid size ...passed 00:07:33.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.674 Test: blockdev write read max offset ...passed 00:07:33.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.674 Test: blockdev writev readv 8 blocks ...passed 00:07:33.674 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.674 Test: blockdev writev readv block ...passed 00:07:33.674 Test: blockdev writev readv size > 128k ...passed 00:07:33.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.674 Test: blockdev comparev and writev ...[2024-12-15 09:45:22.519125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27140e000 len:0x1000 00:07:33.674 [2024-12-15 09:45:22.519336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.674 passed 00:07:33.674 Test: blockdev nvme passthru rw ...passed 00:07:33.674 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:45:22.520784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.674 [2024-12-15 09:45:22.521027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.674 passed 00:07:33.674 Test: blockdev nvme admin passthru ...passed 00:07:33.674 Test: blockdev copy ...passed 00:07:33.674 Suite: bdevio tests on: Nvme2n3 00:07:33.674 Test: blockdev write read block ...passed 00:07:33.674 Test: blockdev write zeroes read block ...passed 00:07:33.674 Test: blockdev write zeroes read no split ...passed 00:07:33.674 Test: blockdev write zeroes read split ...passed 00:07:33.674 Test: blockdev write zeroes read split partial ...passed 00:07:33.674 Test: blockdev reset ...[2024-12-15 09:45:22.588248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:33.674 [2024-12-15 09:45:22.591938] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:33.674 passed 00:07:33.674 Test: blockdev write read 8 blocks ...passed 00:07:33.674 Test: blockdev write read size > 128k ...passed 00:07:33.674 Test: blockdev write read invalid size ...passed 00:07:33.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.674 Test: blockdev write read max offset ...passed 00:07:33.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.674 Test: blockdev writev readv 8 blocks ...passed 00:07:33.674 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.674 Test: blockdev writev readv block ...passed 00:07:33.674 Test: blockdev writev readv size > 128k ...passed 00:07:33.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.674 Test: blockdev comparev and writev ...[2024-12-15 09:45:22.611451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27140a000 len:0x1000 00:07:33.674 [2024-12-15 09:45:22.611725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.674 passed 00:07:33.674 Test: blockdev nvme passthru rw ...passed 00:07:33.674 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:45:22.614941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.674 [2024-12-15 09:45:22.615195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.674 passed 00:07:33.674 Test: blockdev nvme admin passthru ...passed 00:07:33.674 Test: blockdev copy ...passed 00:07:33.674 Suite: bdevio tests on: Nvme2n2 00:07:33.674 Test: blockdev write read block ...passed 00:07:33.674 Test: blockdev write zeroes read block ...passed 00:07:33.674 Test: blockdev write zeroes read no split ...passed 00:07:33.674 Test: blockdev write zeroes read split ...passed 00:07:33.674 Test: blockdev write zeroes read split partial ...passed 00:07:33.674 Test: blockdev reset ...[2024-12-15 09:45:22.676624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:33.674 [2024-12-15 09:45:22.680929] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:33.674 passed 00:07:33.674 Test: blockdev write read 8 blocks ...passed 00:07:33.674 Test: blockdev write read size > 128k ...passed 00:07:33.674 Test: blockdev write read invalid size ...passed 00:07:33.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.674 Test: blockdev write read max offset ...passed 00:07:33.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.935 Test: blockdev writev readv 8 blocks ...passed 00:07:33.935 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.935 Test: blockdev writev readv block ...passed 00:07:33.935 Test: blockdev writev readv size > 128k ...passed 00:07:33.935 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.935 Test: blockdev comparev and writev ...[2024-12-15 09:45:22.702921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271c06000 len:0x1000 00:07:33.935 [2024-12-15 09:45:22.703185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme passthru rw ...passed 00:07:33.935 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:45:22.706097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.935 [2024-12-15 09:45:22.706497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme admin passthru ...passed 00:07:33.935 Test: blockdev copy ...passed 00:07:33.935 Suite: bdevio tests on: Nvme2n1 00:07:33.935 Test: blockdev write read block ...passed 00:07:33.935 Test: blockdev write zeroes read block ...passed 00:07:33.935 Test: blockdev write zeroes read no split ...passed 00:07:33.935 Test: blockdev write zeroes read split ...passed 00:07:33.935 Test: blockdev write zeroes read split partial ...passed 00:07:33.935 Test: blockdev reset ...[2024-12-15 09:45:22.769953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:33.935 [2024-12-15 09:45:22.774723] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:33.935 passed 00:07:33.935 Test: blockdev write read 8 blocks ...passed 00:07:33.935 Test: blockdev write read size > 128k ...passed 00:07:33.935 Test: blockdev write read invalid size ...passed 00:07:33.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.935 Test: blockdev write read max offset ...passed 00:07:33.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.935 Test: blockdev writev readv 8 blocks ...passed 00:07:33.935 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.935 Test: blockdev writev readv block ...passed 00:07:33.935 Test: blockdev writev readv size > 128k ...passed 00:07:33.935 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.935 Test: blockdev comparev and writev ...[2024-12-15 09:45:22.794311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271c01000 len:0x1000 00:07:33.935 [2024-12-15 09:45:22.794526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme passthru rw ...passed 00:07:33.935 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:45:22.797459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme admin passthru ...[2024-12-15 09:45:22.797687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.935 passed 00:07:33.935 Test: blockdev copy ...passed 00:07:33.935 Suite: bdevio tests on: Nvme1n1 00:07:33.935 Test: blockdev write read block ...passed 00:07:33.935 Test: blockdev write zeroes read block ...passed 00:07:33.935 Test: blockdev write zeroes read no split ...passed 00:07:33.935 Test: blockdev write zeroes read split ...passed 00:07:33.935 Test: blockdev write zeroes read split partial ...passed 00:07:33.935 Test: blockdev reset ...[2024-12-15 09:45:22.869965] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:33.935 [2024-12-15 09:45:22.874731] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:33.935 passed 00:07:33.935 Test: blockdev write read 8 blocks ...passed 00:07:33.935 Test: blockdev write read size > 128k ...passed 00:07:33.935 Test: blockdev write read invalid size ...passed 00:07:33.935 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:33.935 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:33.935 Test: blockdev write read max offset ...passed 00:07:33.935 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:33.935 Test: blockdev writev readv 8 blocks ...passed 00:07:33.935 Test: blockdev writev readv 30 x 1block ...passed 00:07:33.935 Test: blockdev writev readv block ...passed 00:07:33.935 Test: blockdev writev readv size > 128k ...passed 00:07:33.935 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:33.935 Test: blockdev comparev and writev ...[2024-12-15 09:45:22.893937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c606000 len:0x1000 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme passthru rw ...[2024-12-15 09:45:22.894241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:45:22.896747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:33.935 passed 00:07:33.935 Test: blockdev nvme admin passthru ...[2024-12-15 09:45:22.896950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:33.935 passed 00:07:33.935 Test: blockdev copy ...passed 00:07:33.935 Suite: bdevio tests on: Nvme0n1 00:07:33.935 Test: blockdev write read block ...passed 00:07:33.935 Test: blockdev write zeroes read block ...passed 00:07:33.935 Test: blockdev write zeroes read no split ...passed 00:07:33.935 Test: blockdev write zeroes read split ...passed 00:07:34.197 Test: blockdev write zeroes read split partial ...passed 00:07:34.197 Test: blockdev reset ...[2024-12-15 09:45:22.964622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:34.197 [2024-12-15 09:45:22.968991] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:34.197 passed 00:07:34.197 Test: blockdev write read 8 blocks ...passed 00:07:34.197 Test: blockdev write read size > 128k ...passed 00:07:34.197 Test: blockdev write read invalid size ...passed 00:07:34.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:34.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:34.197 Test: blockdev write read max offset ...passed 00:07:34.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:34.197 Test: blockdev writev readv 8 blocks ...passed 00:07:34.197 Test: blockdev writev readv 30 x 1block ...passed 00:07:34.197 Test: blockdev writev readv block ...passed 00:07:34.197 Test: blockdev writev readv size > 128k ...passed 00:07:34.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:34.197 Test: blockdev comparev and writev ...passed 00:07:34.197 Test: blockdev nvme passthru rw ...[2024-12-15 09:45:22.985102] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:34.197 separate metadata which is not supported yet. 00:07:34.197 passed 00:07:34.197 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:45:22.987004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:34.197 passed 00:07:34.197 Test: blockdev nvme admin passthru ...[2024-12-15 09:45:22.987225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:34.197 passed 00:07:34.197 Test: blockdev copy ...passed 00:07:34.197 00:07:34.197 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.197 suites 6 6 n/a 0 0 00:07:34.197 tests 138 138 138 0 0 00:07:34.197 asserts 893 893 893 0 n/a 00:07:34.197 00:07:34.197 Elapsed time = 1.432 seconds 00:07:34.197 0 00:07:34.197 09:45:23 -- bdev/blockdev.sh@293 -- # killprocess 60391 00:07:34.197 09:45:23 -- common/autotest_common.sh@936 -- # '[' -z 60391 ']' 00:07:34.197 09:45:23 -- common/autotest_common.sh@940 -- # kill -0 60391 00:07:34.197 09:45:23 -- common/autotest_common.sh@941 -- # uname 00:07:34.197 09:45:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:34.197 09:45:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60391 00:07:34.197 killing process with pid 60391 00:07:34.197 09:45:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:34.197 09:45:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:34.197 09:45:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60391' 00:07:34.197 09:45:23 -- common/autotest_common.sh@955 -- # kill 60391 00:07:34.197 09:45:23 -- common/autotest_common.sh@960 -- # wait 60391 00:07:35.140 ************************************ 00:07:35.140 END TEST bdev_bounds 00:07:35.140 ************************************ 00:07:35.140 09:45:23 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:35.140 00:07:35.140 real 0m3.139s 00:07:35.140 user 0m7.975s 00:07:35.140 sys 0m0.397s 00:07:35.140 09:45:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.140 09:45:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.140 09:45:23 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:35.140 09:45:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:35.140 09:45:23 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.140 09:45:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.140 ************************************ 00:07:35.140 START TEST bdev_nbd 00:07:35.140 ************************************ 00:07:35.140 09:45:23 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:35.140 09:45:23 -- bdev/blockdev.sh@298 -- # uname -s 00:07:35.140 09:45:23 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:35.140 09:45:23 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.140 09:45:23 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.140 09:45:23 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.140 09:45:23 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:35.140 09:45:23 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:35.140 09:45:23 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:35.140 09:45:23 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:35.140 09:45:23 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:35.140 09:45:23 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:35.140 09:45:23 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:35.140 09:45:23 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:35.140 09:45:23 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.140 09:45:23 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:35.140 09:45:23 -- bdev/blockdev.sh@316 -- # nbd_pid=60459 00:07:35.140 09:45:23 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:35.140 09:45:23 -- bdev/blockdev.sh@318 -- # waitforlisten 60459 /var/tmp/spdk-nbd.sock 00:07:35.140 09:45:23 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:35.140 09:45:23 -- common/autotest_common.sh@829 -- # '[' -z 60459 ']' 00:07:35.140 09:45:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.140 09:45:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.140 09:45:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:35.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.140 09:45:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.140 09:45:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.140 [2024-12-15 09:45:23.995081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
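Everything in the NBD test talks to the bdev_svc app started above, reachable only through the dedicated socket /var/tmp/spdk-nbd.sock. The core round trip, condensed from the RPC calls in this trace (RPC is just shorthand for the invocation used throughout):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # export a bdev as a kernel block device
    $RPC nbd_start_disk Nvme0n1 /dev/nbd0
    # list active exports as [{"nbd_device": ..., "bdev_name": ...}, ...]
    $RPC nbd_get_disks
    # tear the export down again
    $RPC nbd_stop_disk /dev/nbd0

Both halves of the test below (nbd_rpc_start_stop_verify, then nbd_rpc_data_verify) are built from these three calls plus the waitfornbd/waitfornbd_exit polling helpers.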
00:07:35.141 [2024-12-15 09:45:23.995499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.141 [2024-12-15 09:45:24.152029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.712 [2024-12-15 09:45:24.440785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.652 09:45:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.652 09:45:25 -- common/autotest_common.sh@862 -- # return 0 00:07:36.652 09:45:25 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@24 -- # local i 00:07:36.652 09:45:25 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:36.653 09:45:25 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:36.653 09:45:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.653 09:45:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:36.914 09:45:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:36.914 09:45:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:36.914 09:45:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:36.914 09:45:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:36.914 09:45:25 -- common/autotest_common.sh@867 -- # local i 00:07:36.914 09:45:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:36.914 09:45:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:36.914 09:45:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:36.914 09:45:25 -- common/autotest_common.sh@871 -- # break 00:07:36.914 09:45:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:36.914 09:45:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:36.914 09:45:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.914 1+0 records in 00:07:36.914 1+0 records out 00:07:36.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708284 s, 5.8 MB/s 00:07:36.914 09:45:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.914 09:45:25 -- common/autotest_common.sh@884 -- # size=4096 00:07:36.914 09:45:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.914 09:45:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:36.914 09:45:25 -- common/autotest_common.sh@887 -- # return 0 00:07:36.914 09:45:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.914 09:45:25 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.914 09:45:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:37.175 09:45:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:37.175 09:45:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:37.175 09:45:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:37.175 09:45:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:37.175 09:45:25 -- common/autotest_common.sh@867 -- # local i 00:07:37.175 09:45:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.175 09:45:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.176 09:45:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:37.176 09:45:25 -- common/autotest_common.sh@871 -- # break 00:07:37.176 09:45:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.176 09:45:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.176 09:45:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.176 1+0 records in 00:07:37.176 1+0 records out 00:07:37.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013187 s, 3.1 MB/s 00:07:37.176 09:45:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.176 09:45:25 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.176 09:45:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.176 09:45:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.176 09:45:25 -- common/autotest_common.sh@887 -- # return 0 00:07:37.176 09:45:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.176 09:45:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:37.176 09:45:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:37.176 09:45:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:37.176 09:45:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:37.438 09:45:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:37.438 09:45:26 -- common/autotest_common.sh@867 -- # local i 00:07:37.438 09:45:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:37.438 09:45:26 -- common/autotest_common.sh@871 -- # break 00:07:37.438 09:45:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.438 1+0 records in 00:07:37.438 1+0 records out 00:07:37.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977268 s, 4.2 MB/s 00:07:37.438 09:45:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.438 09:45:26 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.438 09:45:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.438 09:45:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.438 09:45:26 -- common/autotest_common.sh@887 -- # return 0 
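The waitfornbd checks interleaved above follow one pattern per device: poll /proc/partitions until the node registers, then prove it is readable with a single O_DIRECT block and a non-zero size check. A condensed sketch reconstructed from this trace (the retry bound of 20 comes from the '(( i <= 20 ))' guards; any sleep between retries is not visible in the trace and is assumed here):

    waitfornbd() {
        local nbd_name=$1 i
        # wait for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; the trace only shows the bound and the grep
        done
        # one direct-I/O read, then confirm the output file is non-empty
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }

The trace itself writes to test/bdev/nbdtest inside the repo and removes the file after each check; /tmp/nbdtest above is a stand-in for brevity.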
00:07:37.438 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:37.438 09:45:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:37.438 09:45:26 -- common/autotest_common.sh@867 -- # local i 00:07:37.438 09:45:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:37.438 09:45:26 -- common/autotest_common.sh@871 -- # break 00:07:37.438 09:45:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.438 09:45:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.438 1+0 records in 00:07:37.438 1+0 records out 00:07:37.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887243 s, 4.6 MB/s 00:07:37.438 09:45:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.438 09:45:26 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.438 09:45:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.438 09:45:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.438 09:45:26 -- common/autotest_common.sh@887 -- # return 0 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:37.438 09:45:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:37.700 09:45:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:37.700 09:45:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:37.700 09:45:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:37.700 09:45:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:37.700 09:45:26 -- common/autotest_common.sh@867 -- # local i 00:07:37.700 09:45:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.700 09:45:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.700 09:45:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:37.700 09:45:26 -- common/autotest_common.sh@871 -- # break 00:07:37.700 09:45:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.700 09:45:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.700 09:45:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.700 1+0 records in 00:07:37.700 1+0 records out 00:07:37.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861636 s, 4.8 MB/s 00:07:37.700 09:45:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.700 09:45:26 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.700 09:45:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.700 09:45:26 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:07:37.700 09:45:26 -- common/autotest_common.sh@887 -- # return 0 00:07:37.700 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.700 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:37.700 09:45:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:37.960 09:45:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:37.960 09:45:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:37.960 09:45:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:37.961 09:45:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:37.961 09:45:26 -- common/autotest_common.sh@867 -- # local i 00:07:37.961 09:45:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:37.961 09:45:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:37.961 09:45:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:37.961 09:45:26 -- common/autotest_common.sh@871 -- # break 00:07:37.961 09:45:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:37.961 09:45:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:37.961 09:45:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.961 1+0 records in 00:07:37.961 1+0 records out 00:07:37.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000962257 s, 4.3 MB/s 00:07:37.961 09:45:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.961 09:45:26 -- common/autotest_common.sh@884 -- # size=4096 00:07:37.961 09:45:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.961 09:45:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:37.961 09:45:26 -- common/autotest_common.sh@887 -- # return 0 00:07:37.961 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.961 09:45:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:37.961 09:45:26 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd0", 00:07:38.221 "bdev_name": "Nvme0n1" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd1", 00:07:38.221 "bdev_name": "Nvme1n1" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd2", 00:07:38.221 "bdev_name": "Nvme2n1" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd3", 00:07:38.221 "bdev_name": "Nvme2n2" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd4", 00:07:38.221 "bdev_name": "Nvme2n3" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd5", 00:07:38.221 "bdev_name": "Nvme3n1" 00:07:38.221 } 00:07:38.221 ]' 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd0", 00:07:38.221 "bdev_name": "Nvme0n1" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd1", 00:07:38.221 "bdev_name": "Nvme1n1" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd2", 00:07:38.221 "bdev_name": "Nvme2n1" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd3", 00:07:38.221 
"bdev_name": "Nvme2n2" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd4", 00:07:38.221 "bdev_name": "Nvme2n3" 00:07:38.221 }, 00:07:38.221 { 00:07:38.221 "nbd_device": "/dev/nbd5", 00:07:38.221 "bdev_name": "Nvme3n1" 00:07:38.221 } 00:07:38.221 ]' 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@51 -- # local i 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.221 09:45:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@41 -- # break 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@41 -- # break 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.483 09:45:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@41 -- # break 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.744 09:45:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:39.006 
09:45:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@41 -- # break 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.006 09:45:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@41 -- # break 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@41 -- # break 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.301 09:45:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@65 -- # true 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@65 -- # count=0 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@122 -- # count=0 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@127 -- # return 0 00:07:39.563 09:45:28 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@12 -- # local i 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.563 09:45:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:39.824 /dev/nbd0 00:07:39.824 09:45:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.824 09:45:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.824 09:45:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:39.824 09:45:28 -- common/autotest_common.sh@867 -- # local i 00:07:39.824 09:45:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.824 09:45:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.824 09:45:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:39.824 09:45:28 -- common/autotest_common.sh@871 -- # break 00:07:39.824 09:45:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.824 09:45:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.824 09:45:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.824 1+0 records in 00:07:39.824 1+0 records out 00:07:39.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108498 s, 3.8 MB/s 00:07:39.824 09:45:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.824 09:45:28 -- common/autotest_common.sh@884 -- # size=4096 00:07:39.824 09:45:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.824 09:45:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.824 09:45:28 -- common/autotest_common.sh@887 -- # return 0 00:07:39.824 09:45:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.824 09:45:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.824 09:45:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:40.086 /dev/nbd1 00:07:40.086 09:45:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:40.086 09:45:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:40.086 09:45:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:40.086 09:45:28 -- common/autotest_common.sh@867 -- # local i 00:07:40.086 09:45:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.086 09:45:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.086 09:45:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:40.086 09:45:28 -- common/autotest_common.sh@871 -- # break 
00:07:40.086 09:45:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.086 09:45:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.086 09:45:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.086 1+0 records in 00:07:40.086 1+0 records out 00:07:40.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725313 s, 5.6 MB/s 00:07:40.086 09:45:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.086 09:45:28 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.086 09:45:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.086 09:45:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.086 09:45:28 -- common/autotest_common.sh@887 -- # return 0 00:07:40.086 09:45:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.086 09:45:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.086 09:45:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:40.348 /dev/nbd10 00:07:40.348 09:45:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:40.348 09:45:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:40.348 09:45:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:40.348 09:45:29 -- common/autotest_common.sh@867 -- # local i 00:07:40.348 09:45:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.348 09:45:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.348 09:45:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:40.348 09:45:29 -- common/autotest_common.sh@871 -- # break 00:07:40.348 09:45:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.348 09:45:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.348 09:45:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.348 1+0 records in 00:07:40.348 1+0 records out 00:07:40.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124032 s, 3.3 MB/s 00:07:40.348 09:45:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.348 09:45:29 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.348 09:45:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.348 09:45:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.348 09:45:29 -- common/autotest_common.sh@887 -- # return 0 00:07:40.348 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.348 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.348 09:45:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:40.348 /dev/nbd11 00:07:40.348 09:45:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:40.609 09:45:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:40.609 09:45:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:40.609 09:45:29 -- common/autotest_common.sh@867 -- # local i 00:07:40.609 09:45:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.609 09:45:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.609 09:45:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:40.609 09:45:29 -- 
common/autotest_common.sh@871 -- # break 00:07:40.609 09:45:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.609 09:45:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.609 09:45:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.609 1+0 records in 00:07:40.609 1+0 records out 00:07:40.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585062 s, 7.0 MB/s 00:07:40.609 09:45:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.609 09:45:29 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.609 09:45:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.609 09:45:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.610 09:45:29 -- common/autotest_common.sh@887 -- # return 0 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:40.610 /dev/nbd12 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:40.610 09:45:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:40.610 09:45:29 -- common/autotest_common.sh@867 -- # local i 00:07:40.610 09:45:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.610 09:45:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.610 09:45:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:40.610 09:45:29 -- common/autotest_common.sh@871 -- # break 00:07:40.610 09:45:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.610 09:45:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.610 09:45:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.610 1+0 records in 00:07:40.610 1+0 records out 00:07:40.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100861 s, 4.1 MB/s 00:07:40.610 09:45:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.610 09:45:29 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.610 09:45:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.610 09:45:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.610 09:45:29 -- common/autotest_common.sh@887 -- # return 0 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.610 09:45:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:40.871 /dev/nbd13 00:07:40.871 09:45:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:40.871 09:45:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:40.871 09:45:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:40.871 09:45:29 -- common/autotest_common.sh@867 -- # local i 00:07:40.871 09:45:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.871 09:45:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.871 09:45:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
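The six nbd_start_disk calls around here all come from one loop that walks two parallel arrays, attaching each bdev to its nbd node and waiting for it to come up. A sketch of that dispatch loop (array contents and the rpc.py invocation are taken from the trace; error handling is elided):

rpc_server=/var/tmp/spdk-nbd.sock
bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
for (( i = 0; i < ${#nbd_list[@]}; i++ )); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
        nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"   # as sketched earlier
done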
00:07:40.871 09:45:29 -- common/autotest_common.sh@871 -- # break 00:07:40.871 09:45:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.871 09:45:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.871 09:45:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.871 1+0 records in 00:07:40.872 1+0 records out 00:07:40.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100437 s, 4.1 MB/s 00:07:40.872 09:45:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.872 09:45:29 -- common/autotest_common.sh@884 -- # size=4096 00:07:40.872 09:45:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.872 09:45:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.872 09:45:29 -- common/autotest_common.sh@887 -- # return 0 00:07:40.872 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.872 09:45:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.872 09:45:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.872 09:45:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.872 09:45:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.133 09:45:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:41.133 { 00:07:41.133 "nbd_device": "/dev/nbd0", 00:07:41.133 "bdev_name": "Nvme0n1" 00:07:41.133 }, 00:07:41.133 { 00:07:41.133 "nbd_device": "/dev/nbd1", 00:07:41.134 "bdev_name": "Nvme1n1" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd10", 00:07:41.134 "bdev_name": "Nvme2n1" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd11", 00:07:41.134 "bdev_name": "Nvme2n2" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd12", 00:07:41.134 "bdev_name": "Nvme2n3" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd13", 00:07:41.134 "bdev_name": "Nvme3n1" 00:07:41.134 } 00:07:41.134 ]' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd0", 00:07:41.134 "bdev_name": "Nvme0n1" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd1", 00:07:41.134 "bdev_name": "Nvme1n1" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd10", 00:07:41.134 "bdev_name": "Nvme2n1" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd11", 00:07:41.134 "bdev_name": "Nvme2n2" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd12", 00:07:41.134 "bdev_name": "Nvme2n3" 00:07:41.134 }, 00:07:41.134 { 00:07:41.134 "nbd_device": "/dev/nbd13", 00:07:41.134 "bdev_name": "Nvme3n1" 00:07:41.134 } 00:07:41.134 ]' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:41.134 /dev/nbd1 00:07:41.134 /dev/nbd10 00:07:41.134 /dev/nbd11 00:07:41.134 /dev/nbd12 00:07:41.134 /dev/nbd13' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:41.134 /dev/nbd1 00:07:41.134 /dev/nbd10 00:07:41.134 /dev/nbd11 00:07:41.134 /dev/nbd12 00:07:41.134 /dev/nbd13' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@65 -- # count=6 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@95 -- # count=6 
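nbd_get_count, traced just above, derives the device count purely from RPC output: it lists the active exports as JSON, extracts each entry's nbd_device field with jq, and counts the /dev/nbd matches. Condensed into a few lines (every step below appears in the trace; the || true fallback mirrors the bare `true` the xtrace shows when grep matches nothing):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nbd_disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[ "$count" -eq 6 ]   # the harness bails out when the count differs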
00:07:41.134 09:45:30 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:41.134 256+0 records in 00:07:41.134 256+0 records out 00:07:41.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109416 s, 95.8 MB/s 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.134 09:45:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.395 256+0 records in 00:07:41.395 256+0 records out 00:07:41.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216368 s, 4.8 MB/s 00:07:41.395 09:45:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.395 09:45:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:41.655 256+0 records in 00:07:41.655 256+0 records out 00:07:41.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235702 s, 4.4 MB/s 00:07:41.655 09:45:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.655 09:45:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:41.917 256+0 records in 00:07:41.917 256+0 records out 00:07:41.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231857 s, 4.5 MB/s 00:07:41.917 09:45:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.917 09:45:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:42.182 256+0 records in 00:07:42.182 256+0 records out 00:07:42.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230129 s, 4.6 MB/s 00:07:42.182 09:45:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.182 09:45:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:42.445 256+0 records in 00:07:42.445 256+0 records out 00:07:42.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228851 s, 4.6 MB/s 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:42.445 256+0 records in 00:07:42.445 256+0 records out 00:07:42.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219206 s, 4.8 MB/s 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.445 09:45:31 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.445 09:45:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@51 -- # local i 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@41 -- # break 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.706 09:45:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.966 09:45:31 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@41 -- # break 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.966 09:45:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:43.226 09:45:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@41 -- # break 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.227 09:45:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@41 -- # break 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@41 -- # break 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.487 09:45:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@41 -- # break 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.746 09:45:32 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@65 -- # true 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@104 -- # count=0 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@109 -- # return 0 00:07:44.004 09:45:32 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:44.004 09:45:32 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:44.262 malloc_lvol_verify 00:07:44.262 09:45:33 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:44.520 fc5432fa-a32d-4ba6-aa9f-0f4a6889ab61 00:07:44.520 09:45:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:44.520 e0becd41-c1bb-4664-b086-fac94ed669b8 00:07:44.520 09:45:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:44.778 /dev/nbd0 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:44.778 mke2fs 1.47.0 (5-Feb-2023) 00:07:44.778 Discarding device blocks: 0/4096 done 00:07:44.778 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:44.778 00:07:44.778 Allocating group tables: 0/1 done 00:07:44.778 Writing inode tables: 0/1 done 00:07:44.778 Creating journal (1024 blocks): done 00:07:44.778 Writing superblocks and filesystem accounting information: 0/1 done 00:07:44.778 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@51 -- # local i 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.778 09:45:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@41 -- # break 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:45.037 09:45:33 -- bdev/nbd_common.sh@147 -- # return 0 00:07:45.037 09:45:33 -- bdev/blockdev.sh@324 -- # killprocess 60459 00:07:45.037 09:45:33 -- common/autotest_common.sh@936 -- # '[' -z 60459 ']' 00:07:45.037 09:45:33 -- common/autotest_common.sh@940 -- # kill -0 60459 00:07:45.037 09:45:33 -- common/autotest_common.sh@941 -- # uname 00:07:45.037 09:45:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:45.037 09:45:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60459 00:07:45.037 killing process with pid 60459 00:07:45.037 09:45:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:45.037 09:45:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:45.037 09:45:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60459' 00:07:45.037 09:45:33 -- common/autotest_common.sh@955 -- # kill 60459 00:07:45.037 09:45:33 -- common/autotest_common.sh@960 -- # wait 60459 00:07:47.572 09:45:36 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:47.572 00:07:47.572 real 0m12.629s 00:07:47.572 user 0m15.967s 00:07:47.572 sys 0m3.649s 00:07:47.572 09:45:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.572 09:45:36 -- common/autotest_common.sh@10 -- # set +x 00:07:47.572 ************************************ 00:07:47.572 END TEST bdev_nbd 00:07:47.572 ************************************ 00:07:47.834 09:45:36 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:47.834 09:45:36 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:47.834 skipping fio tests on NVMe due to multi-ns failures. 00:07:47.834 09:45:36 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:47.834 09:45:36 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:47.834 09:45:36 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:47.834 09:45:36 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:47.834 09:45:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.834 09:45:36 -- common/autotest_common.sh@10 -- # set +x 00:07:47.834 ************************************ 00:07:47.834 START TEST bdev_verify 00:07:47.834 ************************************ 00:07:47.834 09:45:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:47.834 [2024-12-15 09:45:36.673649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
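The bdev_verify stage that starts here drives all six NVMe bdevs through the bdevperf example app. Spelled out as a standalone command, with the flags as recorded in the run_test line above (the flag glosses are mine; -C is simply passed through by the harness):

# -q 128: queue depth per job; -o 4096: I/O size in bytes
# -w verify: write, read back, and compare; -t 5: run for five seconds
# -m 0x3: two-core mask, matching the two reactors started in the log below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3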
00:07:47.834 [2024-12-15 09:45:36.673760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ]
00:07:47.834 [2024-12-15 09:45:36.821819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:48.095 [2024-12-15 09:45:37.003539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:48.095 [2024-12-15 09:45:37.003639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.667 Running I/O for 5 seconds...
00:07:53.942
00:07:53.942 Latency(us)
00:07:53.942 [2024-12-15T09:45:42.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x0 length 0xbd0bd
00:07:53.942 Nvme0n1 : 5.03 2804.75 10.96 0.00 0.00 45503.79 8065.97 73803.62
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:53.942 Nvme0n1 : 5.04 2812.75 10.99 0.00 0.00 45377.02 7914.73 71383.83
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x0 length 0xa0000
00:07:53.942 Nvme1n1 : 5.04 2804.10 10.95 0.00 0.00 45471.30 8620.50 72593.72
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0xa0000 length 0xa0000
00:07:53.942 Nvme1n1 : 5.04 2811.15 10.98 0.00 0.00 45351.01 9124.63 68560.74
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x0 length 0x80000
00:07:53.942 Nvme2n1 : 5.05 2813.75 10.99 0.00 0.00 45174.20 3705.30 59688.17
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x80000 length 0x80000
00:07:53.942 Nvme2n1 : 5.04 2815.72 11.00 0.00 0.00 45175.70 5116.85 59688.17
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x0 length 0x80000
00:07:53.942 Nvme2n2 : 5.05 2819.82 11.01 0.00 0.00 45032.20 2785.28 53235.40
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x80000 length 0x80000
00:07:53.942 Nvme2n2 : 5.05 2821.27 11.02 0.00 0.00 45043.17 3478.45 58478.28
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x0 length 0x80000
00:07:53.942 Nvme2n3 : 5.05 2819.33 11.01 0.00 0.00 45010.05 3075.15 53235.40
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x80000 length 0x80000
00:07:53.942 Nvme2n3 : 5.05 2827.80 11.05 0.00 0.00 44838.01 2823.09 57268.38
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x0 length 0x20000
00:07:53.942 Nvme3n1 : 5.05 2818.80 11.01 0.00 0.00 44980.86 3528.86 52025.50
00:07:53.942 [2024-12-15T09:45:42.958Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:53.942 Verification LBA range: start 0x20000 length 0x20000
00:07:53.942 Nvme3n1 : 5.06 2834.36 11.07 0.00 0.00 44686.10 1676.21 57671.68
00:07:53.942 [2024-12-15T09:45:42.958Z] ===================================================================================================================
00:07:53.942 [2024-12-15T09:45:42.958Z] Total : 33803.60 132.05 0.00 0.00 45135.86 1676.21 73803.62
00:08:15.934
00:08:15.934 real 0m28.027s
00:08:15.934 user 0m54.733s
00:08:15.934 sys 0m0.353s
00:08:15.934 09:46:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:15.934 09:46:04 -- common/autotest_common.sh@10 -- # set +x
00:08:15.934 ************************************
00:08:15.934 END TEST bdev_verify
00:08:15.934 ************************************
00:08:15.934 09:46:04 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:15.934 09:46:04 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:15.934 09:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:15.934 09:46:04 -- common/autotest_common.sh@10 -- # set +x
00:08:15.934 ************************************
00:08:15.934 START TEST bdev_verify_big_io
00:08:15.934 ************************************
00:08:15.934 09:46:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:15.934 [2024-12-15 09:46:04.768359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:15.934 [2024-12-15 09:46:04.768481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61051 ]
00:08:16.200 [2024-12-15 09:46:04.918272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:16.200 [2024-12-15 09:46:05.125432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:16.200 [2024-12-15 09:46:05.125519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.139 Running I/O for 5 seconds...
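A quick consistency check on these bdevperf tables: the MiB/s column is just IOPS multiplied by the I/O size. For the first Nvme0n1 row of the verify table above, and the corresponding row of the 64 KiB big-I/O run whose results follow:

echo 'scale=2; 2804.75 * 4096 / 1048576' | bc   # 10.95; the table rounds this to 10.96 MiB/s
echo 'scale=2; 292.00 * 65536 / 1048576' | bc   # 18.25 MiB/s, matching the table below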
00:08:22.406
00:08:22.406 Latency(us)
00:08:22.406 [2024-12-15T09:46:11.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0x0 length 0xbd0b
00:08:22.406 Nvme0n1 : 5.37 292.00 18.25 0.00 0.00 429148.81 64124.46 706578.90
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:22.406 Nvme0n1 : 5.37 299.87 18.74 0.00 0.00 416603.80 63317.86 700126.13
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0x0 length 0xa000
00:08:22.406 Nvme1n1 : 5.37 291.89 18.24 0.00 0.00 423433.92 64124.46 648503.93
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0xa000 length 0xa000
00:08:22.406 Nvme1n1 : 5.37 299.75 18.73 0.00 0.00 411160.28 64124.46 648503.93
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0x0 length 0x8000
00:08:22.406 Nvme2n1 : 5.37 291.77 18.24 0.00 0.00 417654.42 65737.65 587202.56
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0x8000 length 0x8000
00:08:22.406 Nvme2n1 : 5.41 306.81 19.18 0.00 0.00 399950.34 32263.88 593655.34
00:08:22.406 [2024-12-15T09:46:11.422Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:22.406 Verification LBA range: start 0x0 length 0x8000
00:08:22.406 Nvme2n2 : 5.41 297.83 18.61 0.00 0.00 405034.65 31255.63 532353.97
00:08:22.406 [2024-12-15T09:46:11.423Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:22.407 Verification LBA range: start 0x8000 length 0x8000
00:08:22.407 Nvme2n2 : 5.41 306.72 19.17 0.00 0.00 394813.33 32667.18 545259.52
00:08:22.407 [2024-12-15T09:46:11.423Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:22.407 Verification LBA range: start 0x0 length 0x8000
00:08:22.407 Nvme2n3 : 5.42 305.95 19.12 0.00 0.00 390381.88 13409.67 471052.60
00:08:22.407 [2024-12-15T09:46:11.423Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:22.407 Verification LBA range: start 0x8000 length 0x8000
00:08:22.407 Nvme2n3 : 5.41 315.32 19.71 0.00 0.00 381048.02 2381.98 493637.32
00:08:22.407 [2024-12-15T09:46:11.423Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:22.407 Verification LBA range: start 0x0 length 0x2000
00:08:22.407 Nvme3n1 : 5.44 321.07 20.07 0.00 0.00 366775.47 8922.98 425883.18
00:08:22.407 [2024-12-15T09:46:11.423Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:22.407 Verification LBA range: start 0x2000 length 0x2000
00:08:22.407 Nvme3n1 : 5.42 322.39 20.15 0.00 0.00 367172.51 6251.13 435562.34
00:08:22.407 [2024-12-15T09:46:11.423Z] ===================================================================================================================
00:08:22.407 [2024-12-15T09:46:11.423Z] Total : 3651.38 228.21 0.00 0.00 399513.68 2381.98 706578.90
00:08:24.317
00:08:24.317 real 0m8.537s
00:08:24.317 user 0m15.941s
00:08:24.317 sys 0m0.296s
00:08:24.317 09:46:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:24.317 ************************************
00:08:24.317 END TEST bdev_verify_big_io
00:08:24.317 ************************************
00:08:24.317 09:46:13 -- common/autotest_common.sh@10 -- # set +x
00:08:24.317 09:46:13 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:24.317 09:46:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:24.317 09:46:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:24.317 09:46:13 -- common/autotest_common.sh@10 -- # set +x
00:08:24.317 ************************************
00:08:24.317 START TEST bdev_write_zeroes
00:08:24.317 ************************************
00:08:24.317 09:46:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:24.619 [2024-12-15 09:46:13.341773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:24.619 [2024-12-15 09:46:13.341883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61161 ]
00:08:24.619 [2024-12-15 09:46:13.491325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.878 [2024-12-15 09:46:13.685519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.447 Running I/O for 1 seconds...
00:08:26.385
00:08:26.385 Latency(us)
00:08:26.385 [2024-12-15T09:46:15.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:26.385 [2024-12-15T09:46:15.401Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.385 Nvme0n1 : 1.01 7910.69 30.90 0.00 0.00 16121.68 5318.50 152446.82
00:08:26.385 [2024-12-15T09:46:15.401Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.385 Nvme1n1 : 1.01 8072.45 31.53 0.00 0.00 15782.73 9679.16 108083.99
00:08:26.385 [2024-12-15T09:46:15.401Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.385 Nvme2n1 : 1.02 8113.02 31.69 0.00 0.00 15690.80 7713.08 100421.32
00:08:26.385 [2024-12-15T09:46:15.401Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.385 Nvme2n2 : 1.02 8103.44 31.65 0.00 0.00 15630.47 7763.50 101227.91
00:08:26.385 [2024-12-15T09:46:15.401Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.385 Nvme2n3 : 1.02 8030.91 31.37 0.00 0.00 15750.06 7914.73 133895.09
00:08:26.385 [2024-12-15T09:46:15.401Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.385 Nvme3n1 : 1.02 8021.81 31.34 0.00 0.00 15736.96 8318.03 134701.69
00:08:26.385 [2024-12-15T09:46:15.401Z] ===================================================================================================================
00:08:26.385 [2024-12-15T09:46:15.401Z] Total : 48252.31 188.49 0.00 0.00 15783.92 5318.50 152446.82
00:08:27.327
00:08:27.327 real 0m2.948s
00:08:27.327 user 0m2.628s
00:08:27.327 sys 0m0.206s
00:08:27.327 09:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable
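bdev_write_zeroes, whose results appear just above, swaps data verification for zero-fill: bdevperf submits write-zeroes operations to each bdev for one second at the same queue depth. The equivalent standalone invocation, copied from the run_test line in the trace (note that the real time of 0m2.948s includes app start-up and teardown around the 1 s measurement window):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1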
00:08:27.327 ************************************ 00:08:27.327 END TEST bdev_write_zeroes 00:08:27.327 ************************************ 00:08:27.327 09:46:16 -- common/autotest_common.sh@10 -- # set +x 00:08:27.327 09:46:16 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:27.327 09:46:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:27.327 09:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.327 09:46:16 -- common/autotest_common.sh@10 -- # set +x 00:08:27.327 ************************************ 00:08:27.327 START TEST bdev_json_nonenclosed 00:08:27.327 ************************************ 00:08:27.327 09:46:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:27.588 [2024-12-15 09:46:16.370521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.588 [2024-12-15 09:46:16.370653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61214 ] 00:08:27.588 [2024-12-15 09:46:16.523386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.848 [2024-12-15 09:46:16.800070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.848 [2024-12-15 09:46:16.800333] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:27.848 [2024-12-15 09:46:16.800368] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.418 00:08:28.418 real 0m0.853s 00:08:28.418 user 0m0.609s 00:08:28.418 sys 0m0.134s 00:08:28.418 09:46:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.418 ************************************ 00:08:28.418 END TEST bdev_json_nonenclosed 00:08:28.418 ************************************ 00:08:28.418 09:46:17 -- common/autotest_common.sh@10 -- # set +x 00:08:28.418 09:46:17 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:28.418 09:46:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:28.418 09:46:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.418 09:46:17 -- common/autotest_common.sh@10 -- # set +x 00:08:28.418 ************************************ 00:08:28.418 START TEST bdev_json_nonarray 00:08:28.418 ************************************ 00:08:28.418 09:46:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:28.418 [2024-12-15 09:46:17.285713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
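bdev_json_nonenclosed above, and bdev_json_nonarray now starting, are negative tests: each feeds bdevperf a deliberately malformed configuration and passes only when the app rejects it and stops with a non-zero status. The actual fixture files are not shown in this log; an illustrative stand-in for the nonenclosed case might look like the following (the file contents are an assumption, while the expected error string is taken verbatim from the log above):

# hypothetical reconstruction of a "not enclosed in {}" config
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1
# expected: spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.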
00:08:28.418 [2024-12-15 09:46:17.285857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61245 ] 00:08:28.679 [2024-12-15 09:46:17.442022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.679 [2024-12-15 09:46:17.690970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.679 [2024-12-15 09:46:17.691178] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:28.679 [2024-12-15 09:46:17.691206] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.250 00:08:29.250 real 0m0.793s 00:08:29.250 user 0m0.547s 00:08:29.250 sys 0m0.137s 00:08:29.250 ************************************ 00:08:29.250 END TEST bdev_json_nonarray 00:08:29.250 ************************************ 00:08:29.250 09:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.250 09:46:18 -- common/autotest_common.sh@10 -- # set +x 00:08:29.250 09:46:18 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:29.250 09:46:18 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:29.250 09:46:18 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:29.250 09:46:18 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:29.250 09:46:18 -- bdev/blockdev.sh@809 -- # cleanup 00:08:29.250 09:46:18 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:29.250 09:46:18 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:29.250 09:46:18 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:29.250 09:46:18 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:29.250 09:46:18 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:29.250 09:46:18 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:29.250 00:08:29.250 real 1m3.136s 00:08:29.250 user 1m44.019s 00:08:29.250 sys 0m6.134s 00:08:29.250 09:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.250 ************************************ 00:08:29.250 09:46:18 -- common/autotest_common.sh@10 -- # set +x 00:08:29.250 END TEST blockdev_nvme 00:08:29.250 ************************************ 00:08:29.250 09:46:18 -- spdk/autotest.sh@206 -- # uname -s 00:08:29.250 09:46:18 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:29.250 09:46:18 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:29.250 09:46:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:29.250 09:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.250 09:46:18 -- common/autotest_common.sh@10 -- # set +x 00:08:29.250 ************************************ 00:08:29.250 START TEST blockdev_nvme_gpt 00:08:29.250 ************************************ 00:08:29.250 09:46:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:29.250 * Looking for test storage... 
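The hand-off above from blockdev_nvme to blockdev_nvme_gpt is gated on the platform: autotest.sh checks uname -s and dispatches the gpt suite only on Linux, where the tooling it needs (nbd, parted, sgdisk) is available. Reconstructed from the uname and run_test lines in the trace:

if [ "$(uname -s)" = Linux ]; then
    run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
fi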
00:08:29.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:29.250 09:46:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.250 09:46:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.250 09:46:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.250 09:46:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.250 09:46:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.250 09:46:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.250 09:46:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.250 09:46:18 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.250 09:46:18 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.250 09:46:18 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.250 09:46:18 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.512 09:46:18 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.512 09:46:18 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.512 09:46:18 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.512 09:46:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.512 09:46:18 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.512 09:46:18 -- scripts/common.sh@344 -- # : 1 00:08:29.512 09:46:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.512 09:46:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.512 09:46:18 -- scripts/common.sh@364 -- # decimal 1 00:08:29.512 09:46:18 -- scripts/common.sh@352 -- # local d=1 00:08:29.512 09:46:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.512 09:46:18 -- scripts/common.sh@354 -- # echo 1 00:08:29.512 09:46:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.512 09:46:18 -- scripts/common.sh@365 -- # decimal 2 00:08:29.512 09:46:18 -- scripts/common.sh@352 -- # local d=2 00:08:29.512 09:46:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.512 09:46:18 -- scripts/common.sh@354 -- # echo 2 00:08:29.512 09:46:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.512 09:46:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.512 09:46:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.512 09:46:18 -- scripts/common.sh@367 -- # return 0 00:08:29.512 09:46:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.512 09:46:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.512 --rc genhtml_branch_coverage=1 00:08:29.512 --rc genhtml_function_coverage=1 00:08:29.512 --rc genhtml_legend=1 00:08:29.512 --rc geninfo_all_blocks=1 00:08:29.512 --rc geninfo_unexecuted_blocks=1 00:08:29.512 00:08:29.512 ' 00:08:29.512 09:46:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.512 --rc genhtml_branch_coverage=1 00:08:29.512 --rc genhtml_function_coverage=1 00:08:29.512 --rc genhtml_legend=1 00:08:29.512 --rc geninfo_all_blocks=1 00:08:29.512 --rc geninfo_unexecuted_blocks=1 00:08:29.512 00:08:29.512 ' 00:08:29.512 09:46:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.512 --rc genhtml_branch_coverage=1 00:08:29.512 --rc genhtml_function_coverage=1 00:08:29.512 --rc genhtml_legend=1 00:08:29.512 --rc geninfo_all_blocks=1 00:08:29.512 --rc geninfo_unexecuted_blocks=1 00:08:29.512 00:08:29.512 ' 00:08:29.512 09:46:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.512 --rc genhtml_branch_coverage=1 00:08:29.512 --rc genhtml_function_coverage=1 00:08:29.512 --rc genhtml_legend=1 00:08:29.512 --rc geninfo_all_blocks=1 00:08:29.512 --rc geninfo_unexecuted_blocks=1 00:08:29.512 00:08:29.512 ' 00:08:29.512 09:46:18 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:29.512 09:46:18 -- bdev/nbd_common.sh@6 -- # set -e 00:08:29.512 09:46:18 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:29.512 09:46:18 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:29.512 09:46:18 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:29.512 09:46:18 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:29.512 09:46:18 -- bdev/blockdev.sh@18 -- # : 00:08:29.512 09:46:18 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:29.512 09:46:18 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:29.512 09:46:18 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:29.512 09:46:18 -- bdev/blockdev.sh@672 -- # uname -s 00:08:29.512 09:46:18 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:29.512 09:46:18 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:29.512 09:46:18 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:29.512 09:46:18 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:29.512 09:46:18 -- bdev/blockdev.sh@682 -- # dek= 00:08:29.512 09:46:18 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:29.512 09:46:18 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:29.512 09:46:18 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:29.512 09:46:18 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:29.512 09:46:18 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:29.512 09:46:18 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:29.512 09:46:18 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61328 00:08:29.512 09:46:18 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:29.512 09:46:18 -- bdev/blockdev.sh@47 -- # waitforlisten 61328 00:08:29.512 09:46:18 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:29.512 09:46:18 -- common/autotest_common.sh@829 -- # '[' -z 61328 ']' 00:08:29.512 09:46:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.512 09:46:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.512 09:46:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.512 09:46:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.512 09:46:18 -- common/autotest_common.sh@10 -- # set +x 00:08:29.512 [2024-12-15 09:46:18.367603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
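waitforlisten, traced above with pid 61328, blocks until the freshly started spdk_tgt answers RPC on /var/tmp/spdk.sock. A sketch of that wait under stated assumptions (the pid argument check, socket path, and max_retries=100 come from the trace; the probe RPC and the pause between probes are assumptions):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2> /dev/null || return 1   # give up if the target died
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
            rpc_get_methods &> /dev/null; then
            return 0   # RPC server is listening
        fi
        sleep 0.1   # assumed pause between probes
    done
    return 1
}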
00:08:29.512 [2024-12-15 09:46:18.367751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61328 ] 00:08:29.512 [2024-12-15 09:46:18.521775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.774 [2024-12-15 09:46:18.751749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:29.774 [2024-12-15 09:46:18.751982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.161 09:46:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.161 09:46:19 -- common/autotest_common.sh@862 -- # return 0 00:08:31.161 09:46:19 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:31.161 09:46:19 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:31.161 09:46:19 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:31.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:31.422 Waiting for block devices as requested 00:08:31.422 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:31.683 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:31.683 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:31.683 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.977 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:36.977 09:46:25 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:36.977 09:46:25 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:36.977 09:46:25 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:36.977 09:46:25 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:36.977 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.977 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:36.977 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:36.977 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:36.977 09:46:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.977 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.978 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.978 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.978 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:36.978 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:36.978 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:36.978 09:46:25 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.978 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:36.978 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:36.978 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.978 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:36.978 09:46:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:36.978 09:46:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:36.978 09:46:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:36.978 09:46:25 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:36.978 09:46:25 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:36.978 09:46:25 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:36.978 09:46:25 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:36.978 09:46:25 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:36.978 09:46:25 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:36.978 09:46:25 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:36.978 09:46:25 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:36.978 BYT; 00:08:36.978 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:36.978 09:46:25 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:36.978 BYT; 00:08:36.978 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:36.978 09:46:25 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:36.978 09:46:25 -- bdev/blockdev.sh@114 -- # break 00:08:36.978 09:46:25 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:36.978 09:46:25 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:36.978 09:46:25 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:36.978 09:46:25 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:36.978 09:46:25 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:36.978 09:46:25 -- scripts/common.sh@410 -- # local spdk_guid 00:08:36.978 09:46:25 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:36.978 09:46:25 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:36.978 09:46:25 -- scripts/common.sh@415 -- # IFS='()' 00:08:36.978 09:46:25 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:36.978 09:46:25 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:36.978 09:46:25 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:36.978 09:46:25 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:36.978 09:46:25 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:36.978 09:46:25 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:36.978 09:46:25 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:36.978 09:46:25 -- scripts/common.sh@422 -- # local spdk_guid 00:08:36.978 09:46:25 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:36.978 09:46:25 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:36.978 09:46:25 -- scripts/common.sh@427 -- # IFS='()' 00:08:36.978 09:46:25 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:36.978 09:46:25 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:36.978 09:46:25 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:36.978 09:46:25 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:36.978 09:46:25 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:36.978 09:46:25 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:36.978 09:46:25 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:38.352 The operation has completed successfully. 00:08:38.352 09:46:26 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:39.286 The operation has completed successfully. 
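The xtrace above spreads the GPT setup across many statements: zoned namespaces are filtered out via /sys/block/*/queue/zoned, parted's "unrecognised disk label" error identifies a blank namespace, and sgdisk then stamps the SPDK partition type GUIDs from module/bdev/gpt/gpt.h onto two half-disk test partitions. A minimal standalone sketch of the same flow, assuming a disposable namespace is passed as $1 and reusing the GUID values printed in this log:

    #!/usr/bin/env bash
    # Sketch of the setup_gpt_conf steps shown above (blockdev.sh@102-131).
    set -euo pipefail
    dev=${1:?usage: $0 /dev/nvmeXnY}

    # Skip zoned namespaces, mirroring get_zoned_devs.
    name=$(basename "$dev")
    if [[ -e /sys/block/$name/queue/zoned && $(</sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned device $dev" >&2
        exit 1
    fi

    # Only a disk without a recognised label is eligible (blockdev.sh@111-112).
    pt=$(parted "$dev" -ms print 2>&1 || true)
    if [[ $pt != *"unrecognised disk label"* ]]; then
        echo "$dev already labelled" >&2
        exit 1
    fi

    # GUID values as extracted from module/bdev/gpt/gpt.h in the log.
    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
    part1_uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    part2_uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df

    # Two half-disk partitions, then retype them so the gpt module claims them.
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:"$part1_uuid" "$dev"
    sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:"$part2_uuid" "$dev"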
00:08:39.286 09:46:27 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:39.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.851 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.851 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.851 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.109 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.109 09:46:28 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:40.109 09:46:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 [] 00:08:40.109 09:46:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.109 09:46:28 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:40.109 09:46:28 -- bdev/blockdev.sh@79 -- # local json 00:08:40.109 09:46:28 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:40.109 09:46:28 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.109 09:46:29 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:40.109 09:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:46:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.368 09:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.368 09:46:29 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:40.368 09:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.368 09:46:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.368 09:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.368 09:46:29 -- bdev/blockdev.sh@738 -- # cat 00:08:40.368 09:46:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:40.368 09:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.368 09:46:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.368 09:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.368 09:46:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:40.368 09:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.368 09:46:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.368 09:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.368 09:46:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:40.368 09:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.368 09:46:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.368 09:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.368 09:46:29 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:40.368 09:46:29 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:40.368 09:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.368 09:46:29 -- common/autotest_common.sh@10 -- # set +x 00:08:40.368 09:46:29 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:40.627 09:46:29 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.627 09:46:29 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:40.628 09:46:29 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "104502de-5d58-4b8f-a313-cd807018ff6e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "104502de-5d58-4b8f-a313-cd807018ff6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' 
"5439cd76-d41f-493d-a9d3-1ca6db18416b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5439cd76-d41f-493d-a9d3-1ca6db18416b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b5369053-bbbc-47d2-9709-285f9ffecbe2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b5369053-bbbc-47d2-9709-285f9ffecbe2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "29158951-296b-4bc8-962a-2b67e945abe9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "29158951-296b-4bc8-962a-2b67e945abe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3e60f497-13cd-43cd-8f5a-432886cc16e1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3e60f497-13cd-43cd-8f5a-432886cc16e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:40.628 09:46:29 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:40.628 09:46:29 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:40.628 09:46:29 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:40.628 09:46:29 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:40.628 09:46:29 -- bdev/blockdev.sh@752 -- # killprocess 61328 00:08:40.628 09:46:29 -- common/autotest_common.sh@936 -- # '[' -z 61328 ']' 00:08:40.628 09:46:29 -- common/autotest_common.sh@940 -- # kill -0 61328 00:08:40.628 09:46:29 -- common/autotest_common.sh@941 -- # uname 00:08:40.628 09:46:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.628 09:46:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61328 00:08:40.628 09:46:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.628 09:46:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.628 killing process with pid 61328 00:08:40.628 09:46:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61328' 00:08:40.628 09:46:29 -- common/autotest_common.sh@955 -- # kill 61328 00:08:40.628 09:46:29 -- common/autotest_common.sh@960 -- # wait 61328 00:08:42.001 09:46:30 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:42.001 09:46:30 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:42.001 09:46:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:42.001 09:46:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.001 09:46:30 -- common/autotest_common.sh@10 -- # set +x 00:08:42.001 ************************************ 00:08:42.001 START TEST bdev_hello_world 00:08:42.001 ************************************ 00:08:42.001 09:46:30 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:42.001 [2024-12-15 09:46:30.735073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.001 [2024-12-15 09:46:30.735184] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61991 ] 00:08:42.001 [2024-12-15 09:46:30.880983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.258 [2024-12-15 09:46:31.084876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.824 [2024-12-15 09:46:31.629400] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:42.824 [2024-12-15 09:46:31.629448] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:42.824 [2024-12-15 09:46:31.629466] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:42.824 [2024-12-15 09:46:31.631959] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:42.824 [2024-12-15 09:46:31.632332] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:42.824 [2024-12-15 09:46:31.632362] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:42.824 [2024-12-15 09:46:31.632547] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:42.824 00:08:42.824 [2024-12-15 09:46:31.632579] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:43.436 00:08:43.436 real 0m1.638s 00:08:43.436 user 0m1.333s 00:08:43.436 sys 0m0.198s 00:08:43.436 09:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.436 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.436 ************************************ 00:08:43.436 END TEST bdev_hello_world 00:08:43.436 ************************************ 00:08:43.436 09:46:32 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:43.436 09:46:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.436 09:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.436 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.436 ************************************ 00:08:43.436 START TEST bdev_bounds 00:08:43.436 ************************************ 00:08:43.436 09:46:32 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:43.436 09:46:32 -- bdev/blockdev.sh@288 -- # bdevio_pid=62022 00:08:43.436 09:46:32 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.436 Process bdevio pid: 62022 00:08:43.436 09:46:32 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62022' 00:08:43.436 09:46:32 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:43.436 09:46:32 -- bdev/blockdev.sh@291 -- # waitforlisten 62022 00:08:43.436 09:46:32 -- common/autotest_common.sh@829 -- # '[' -z 62022 ']' 00:08:43.436 09:46:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.436 09:46:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
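The bdev_hello_world stage that just completed reduces to a single invocation of the prebuilt example binary against the generated bdev config: hello_bdev opens the named bdev, writes a buffer, reads it back, and prints "Read string from bdev : Hello World!" before stopping the app. A sketch, assuming the /home/vagrant/spdk_repo layout used throughout this run:

    #!/usr/bin/env bash
    # Sketch: the essence of the bdev_hello_world test above.
    spdk=/home/vagrant/spdk_repo/spdk

    "$spdk/build/examples/hello_bdev" \
        --json "$spdk/test/bdev/bdev.json" \
        -b Nvme0n1p1

The harness wraps this in run_test, so a non-zero exit status from hello_bdev fails the stage.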
00:08:43.436 09:46:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.436 09:46:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.436 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.436 [2024-12-15 09:46:32.416377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.436 [2024-12-15 09:46:32.416777] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62022 ] 00:08:43.694 [2024-12-15 09:46:32.564161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.953 [2024-12-15 09:46:32.720341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.953 [2024-12-15 09:46:32.720458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.953 [2024-12-15 09:46:32.720472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.524 09:46:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.524 09:46:33 -- common/autotest_common.sh@862 -- # return 0 00:08:44.524 09:46:33 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:44.524 I/O targets: 00:08:44.524 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:44.524 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:44.524 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:44.524 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:44.524 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:44.524 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:44.524 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:44.524 00:08:44.524 00:08:44.524 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.524 http://cunit.sourceforge.net/ 00:08:44.524 00:08:44.524 00:08:44.524 Suite: bdevio tests on: Nvme3n1 00:08:44.524 Test: blockdev write read block ...passed 00:08:44.524 Test: blockdev write zeroes read block ...passed 00:08:44.524 Test: blockdev write zeroes read no split ...passed 00:08:44.524 Test: blockdev write zeroes read split ...passed 00:08:44.524 Test: blockdev write zeroes read split partial ...passed 00:08:44.524 Test: blockdev reset ...[2024-12-15 09:46:33.394355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:44.524 [2024-12-15 09:46:33.399320] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
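The bdevio suites now starting are driven in two pieces: the bdevio app is launched waiting for an RPC trigger (-w, with -s 0 matching the PRE_RESERVED_MEM=0 set earlier), and tests.py perform_tests then kicks off every registered CUnit suite. A rough sketch of that orchestration; the harness itself uses its waitforlisten helper on /var/tmp/spdk.sock, for which the rpc.py polling loop below is a stand-in, not the actual implementation:

    #!/usr/bin/env bash
    # Sketch: how the bdevio CUnit suites above are driven (blockdev.sh@287-292).
    spdk=/home/vagrant/spdk_repo/spdk

    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$spdk/test/bdev/bdev.json" &
    bdevio_pid=$!

    # Stand-in for waitforlisten: poll until the RPC socket answers.
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # Run all registered suites (the "bdevio tests on: ..." output above).
    "$spdk/test/bdev/bdevio/tests.py" perform_tests

    kill "$bdevio_pid"
    wait "$bdevio_pid" || true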
00:08:44.524 passed 00:08:44.524 Test: blockdev write read 8 blocks ...passed 00:08:44.524 Test: blockdev write read size > 128k ...passed 00:08:44.524 Test: blockdev write read invalid size ...passed 00:08:44.524 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.524 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.524 Test: blockdev write read max offset ...passed 00:08:44.524 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.524 Test: blockdev writev readv 8 blocks ...passed 00:08:44.524 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.524 Test: blockdev writev readv block ...passed 00:08:44.524 Test: blockdev writev readv size > 128k ...passed 00:08:44.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.524 Test: blockdev comparev and writev ...[2024-12-15 09:46:33.414284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26920a000 len:0x1000 00:08:44.524 [2024-12-15 09:46:33.414337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.524 passed 00:08:44.524 Test: blockdev nvme passthru rw ...passed 00:08:44.524 Test: blockdev nvme passthru vendor specific ...passed 00:08:44.524 Test: blockdev nvme admin passthru ...[2024-12-15 09:46:33.415871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.524 [2024-12-15 09:46:33.415899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.524 passed 00:08:44.524 Test: blockdev copy ...passed 00:08:44.524 Suite: bdevio tests on: Nvme2n3 00:08:44.524 Test: blockdev write read block ...passed 00:08:44.524 Test: blockdev write zeroes read block ...passed 00:08:44.524 Test: blockdev write zeroes read no split ...passed 00:08:44.524 Test: blockdev write zeroes read split ...passed 00:08:44.524 Test: blockdev write zeroes read split partial ...passed 00:08:44.524 Test: blockdev reset ...[2024-12-15 09:46:33.478274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:44.524 [2024-12-15 09:46:33.484552] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.524 passed 00:08:44.524 Test: blockdev write read 8 blocks ...passed 00:08:44.524 Test: blockdev write read size > 128k ...passed 00:08:44.524 Test: blockdev write read invalid size ...passed 00:08:44.524 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.524 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.524 Test: blockdev write read max offset ...passed 00:08:44.524 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.524 Test: blockdev writev readv 8 blocks ...passed 00:08:44.524 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.524 Test: blockdev writev readv block ...passed 00:08:44.524 Test: blockdev writev readv size > 128k ...passed 00:08:44.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.524 Test: blockdev comparev and writev ...[2024-12-15 09:46:33.494690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274704000 len:0x1000 00:08:44.524 [2024-12-15 09:46:33.494740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.524 passed 00:08:44.524 Test: blockdev nvme passthru rw ...passed 00:08:44.524 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:46:33.496117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.524 [2024-12-15 09:46:33.496141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.524 passed 00:08:44.524 Test: blockdev nvme admin passthru ...passed 00:08:44.524 Test: blockdev copy ...passed 00:08:44.524 Suite: bdevio tests on: Nvme2n2 00:08:44.524 Test: blockdev write read block ...passed 00:08:44.524 Test: blockdev write zeroes read block ...passed 00:08:44.524 Test: blockdev write zeroes read no split ...passed 00:08:44.524 Test: blockdev write zeroes read split ...passed 00:08:44.785 Test: blockdev write zeroes read split partial ...passed 00:08:44.785 Test: blockdev reset ...[2024-12-15 09:46:33.549531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:44.785 [2024-12-15 09:46:33.556548] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.785 passed 00:08:44.785 Test: blockdev write read 8 blocks ...passed 00:08:44.785 Test: blockdev write read size > 128k ...passed 00:08:44.785 Test: blockdev write read invalid size ...passed 00:08:44.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.785 Test: blockdev write read max offset ...passed 00:08:44.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.785 Test: blockdev writev readv 8 blocks ...passed 00:08:44.785 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.785 Test: blockdev writev readv block ...passed 00:08:44.785 Test: blockdev writev readv size > 128k ...passed 00:08:44.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.785 Test: blockdev comparev and writev ...[2024-12-15 09:46:33.564308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274704000 len:0x1000 00:08:44.785 [2024-12-15 09:46:33.564350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.785 passed 00:08:44.785 Test: blockdev nvme passthru rw ...passed 00:08:44.785 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:46:33.566244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.785 passed 00:08:44.785 Test: blockdev nvme admin passthru ...[2024-12-15 09:46:33.566291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.785 passed 00:08:44.785 Test: blockdev copy ...passed 00:08:44.785 Suite: bdevio tests on: Nvme2n1 00:08:44.785 Test: blockdev write read block ...passed 00:08:44.785 Test: blockdev write zeroes read block ...passed 00:08:44.785 Test: blockdev write zeroes read no split ...passed 00:08:44.785 Test: blockdev write zeroes read split ...passed 00:08:44.785 Test: blockdev write zeroes read split partial ...passed 00:08:44.785 Test: blockdev reset ...[2024-12-15 09:46:33.622016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:44.785 passed 00:08:44.785 Test: blockdev write read 8 blocks ...[2024-12-15 09:46:33.626569] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.785 passed 00:08:44.785 Test: blockdev write read size > 128k ...passed 00:08:44.785 Test: blockdev write read invalid size ...passed 00:08:44.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.785 Test: blockdev write read max offset ...passed 00:08:44.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.785 Test: blockdev writev readv 8 blocks ...passed 00:08:44.785 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.785 Test: blockdev writev readv block ...passed 00:08:44.785 Test: blockdev writev readv size > 128k ...passed 00:08:44.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.785 Test: blockdev comparev and writev ...[2024-12-15 09:46:33.646220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c23c000 len:0x1000 00:08:44.785 [2024-12-15 09:46:33.646276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.785 passed 00:08:44.785 Test: blockdev nvme passthru rw ...passed 00:08:44.785 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:46:33.649114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.785 [2024-12-15 09:46:33.649148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.785 passed 00:08:44.785 Test: blockdev nvme admin passthru ...passed 00:08:44.785 Test: blockdev copy ...passed 00:08:44.785 Suite: bdevio tests on: Nvme1n1 00:08:44.785 Test: blockdev write read block ...passed 00:08:44.785 Test: blockdev write zeroes read block ...passed 00:08:44.785 Test: blockdev write zeroes read no split ...passed 00:08:44.785 Test: blockdev write zeroes read split ...passed 00:08:44.785 Test: blockdev write zeroes read split partial ...passed 00:08:44.785 Test: blockdev reset ...[2024-12-15 09:46:33.703242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:44.785 passed 00:08:44.785 Test: blockdev write read 8 blocks ...[2024-12-15 09:46:33.706725] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.785 passed 00:08:44.785 Test: blockdev write read size > 128k ...passed 00:08:44.785 Test: blockdev write read invalid size ...passed 00:08:44.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.785 Test: blockdev write read max offset ...passed 00:08:44.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.785 Test: blockdev writev readv 8 blocks ...passed 00:08:44.785 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.785 Test: blockdev writev readv block ...passed 00:08:44.785 Test: blockdev writev readv size > 128k ...passed 00:08:44.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.785 Test: blockdev comparev and writev ...[2024-12-15 09:46:33.725631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c238000 len:0x1000 00:08:44.785 [2024-12-15 09:46:33.725671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:44.785 passed 00:08:44.785 Test: blockdev nvme passthru rw ...passed 00:08:44.785 Test: blockdev nvme passthru vendor specific ...[2024-12-15 09:46:33.728201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:44.785 [2024-12-15 09:46:33.728230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:44.785 passed 00:08:44.785 Test: blockdev nvme admin passthru ...passed 00:08:44.785 Test: blockdev copy ...passed 00:08:44.785 Suite: bdevio tests on: Nvme0n1p2 00:08:44.785 Test: blockdev write read block ...passed 00:08:44.785 Test: blockdev write zeroes read block ...passed 00:08:44.785 Test: blockdev write zeroes read no split ...passed 00:08:44.785 Test: blockdev write zeroes read split ...passed 00:08:44.785 Test: blockdev write zeroes read split partial ...passed 00:08:44.785 Test: blockdev reset ...[2024-12-15 09:46:33.788361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:44.785 [2024-12-15 09:46:33.792021] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.785 passed 00:08:44.785 Test: blockdev write read 8 blocks ...passed 00:08:44.785 Test: blockdev write read size > 128k ...passed 00:08:44.785 Test: blockdev write read invalid size ...passed 00:08:44.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.785 Test: blockdev write read max offset ...passed 00:08:44.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:45.045 Test: blockdev writev readv 8 blocks ...passed 00:08:45.045 Test: blockdev writev readv 30 x 1block ...passed 00:08:45.045 Test: blockdev writev readv block ...passed 00:08:45.045 Test: blockdev writev readv size > 128k ...passed 00:08:45.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:45.045 Test: blockdev comparev and writev ...passed 00:08:45.045 Test: blockdev nvme passthru rw ...passed 00:08:45.045 Test: blockdev nvme passthru vendor specific ...passed 00:08:45.045 Test: blockdev nvme admin passthru ...passed 00:08:45.046 Test: blockdev copy ...[2024-12-15 09:46:33.807926] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:45.046 separate metadata which is not supported yet. 00:08:45.046 passed 00:08:45.046 Suite: bdevio tests on: Nvme0n1p1 00:08:45.046 Test: blockdev write read block ...passed 00:08:45.046 Test: blockdev write zeroes read block ...passed 00:08:45.046 Test: blockdev write zeroes read no split ...passed 00:08:45.046 Test: blockdev write zeroes read split ...passed 00:08:45.046 Test: blockdev write zeroes read split partial ...passed 00:08:45.046 Test: blockdev reset ...[2024-12-15 09:46:33.861517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:45.046 [2024-12-15 09:46:33.865159] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:45.046 passed 00:08:45.046 Test: blockdev write read 8 blocks ...passed 00:08:45.046 Test: blockdev write read size > 128k ...passed 00:08:45.046 Test: blockdev write read invalid size ...passed 00:08:45.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:45.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:45.046 Test: blockdev write read max offset ...passed 00:08:45.046 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:45.046 Test: blockdev writev readv 8 blocks ...passed 00:08:45.046 Test: blockdev writev readv 30 x 1block ...passed 00:08:45.046 Test: blockdev writev readv block ...passed 00:08:45.046 Test: blockdev writev readv size > 128k ...passed 00:08:45.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:45.046 Test: blockdev comparev and writev ...passed 00:08:45.046 Test: blockdev nvme passthru rw ...passed 00:08:45.046 Test: blockdev nvme passthru vendor specific ...passed 00:08:45.046 Test: blockdev nvme admin passthru ...passed 00:08:45.046 Test: blockdev copy ...[2024-12-15 09:46:33.880551] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:45.046 separate metadata which is not supported yet. 
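The two skip messages just above come from bdevio declining comparev_and_writev on the GPT partitions, which carry separate, non-interleaved metadata (see the earlier bdev dump: "md_size": 64, "md_interleave": false). The exact criterion lives inside bdevio.c, so the filter below is only an approximation for predicting which bdevs would hit this path from the RPC output:

    #!/usr/bin/env bash
    # Sketch (approximate criterion): list bdevs reporting separate,
    # non-interleaved metadata, like the two GPT partitions dumped earlier.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/scripts/rpc.py" bdev_get_bdevs \
        | jq -r '.[] | select(.md_size > 0 and .md_interleave == false) | .name'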
00:08:45.046 passed 00:08:45.046 00:08:45.046 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.046 suites 7 7 n/a 0 0 00:08:45.046 tests 161 161 161 0 0 00:08:45.046 asserts 1006 1006 1006 0 n/a 00:08:45.046 00:08:45.046 Elapsed time = 1.389 seconds 00:08:45.046 0 00:08:45.046 09:46:33 -- bdev/blockdev.sh@293 -- # killprocess 62022 00:08:45.046 09:46:33 -- common/autotest_common.sh@936 -- # '[' -z 62022 ']' 00:08:45.046 09:46:33 -- common/autotest_common.sh@940 -- # kill -0 62022 00:08:45.046 09:46:33 -- common/autotest_common.sh@941 -- # uname 00:08:45.046 09:46:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.046 09:46:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62022 00:08:45.046 killing process with pid 62022 00:08:45.046 09:46:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.046 09:46:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.046 09:46:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62022' 00:08:45.046 09:46:33 -- common/autotest_common.sh@955 -- # kill 62022 00:08:45.046 09:46:33 -- common/autotest_common.sh@960 -- # wait 62022 00:08:45.616 ************************************ 00:08:45.616 END TEST bdev_bounds 00:08:45.616 ************************************ 00:08:45.616 09:46:34 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:45.616 00:08:45.616 real 0m2.234s 00:08:45.616 user 0m5.462s 00:08:45.616 sys 0m0.291s 00:08:45.616 09:46:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.616 09:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:45.877 09:46:34 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:45.877 09:46:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:45.877 09:46:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.877 09:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:45.877 ************************************ 00:08:45.877 START TEST bdev_nbd 00:08:45.877 ************************************ 00:08:45.877 09:46:34 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:45.877 09:46:34 -- bdev/blockdev.sh@298 -- # uname -s 00:08:45.877 09:46:34 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:45.877 09:46:34 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.877 09:46:34 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:45.877 09:46:34 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.877 09:46:34 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:45.877 09:46:34 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:45.877 09:46:34 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:45.877 09:46:34 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:45.877 09:46:34 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:45.877 09:46:34 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:45.877 09:46:34 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.877 09:46:34 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:45.877 09:46:34 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.877 09:46:34 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:45.877 09:46:34 -- bdev/blockdev.sh@316 -- # nbd_pid=62081 00:08:45.877 09:46:34 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:45.877 09:46:34 -- bdev/blockdev.sh@318 -- # waitforlisten 62081 /var/tmp/spdk-nbd.sock 00:08:45.877 09:46:34 -- common/autotest_common.sh@829 -- # '[' -z 62081 ']' 00:08:45.877 09:46:34 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.877 09:46:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:45.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:45.877 09:46:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.877 09:46:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:45.877 09:46:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.877 09:46:34 -- common/autotest_common.sh@10 -- # set +x 00:08:45.877 [2024-12-15 09:46:34.712428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.877 [2024-12-15 09:46:34.712521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.877 [2024-12-15 09:46:34.856063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.138 [2024-12-15 09:46:35.021982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.706 09:46:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.706 09:46:35 -- common/autotest_common.sh@862 -- # return 0 00:08:46.706 09:46:35 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@24 -- # local i 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.706 09:46:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:46.967 09:46:35 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:46.967 09:46:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:46.967 09:46:35 -- common/autotest_common.sh@867 -- # local i 00:08:46.967 09:46:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:46.967 09:46:35 -- common/autotest_common.sh@871 -- # break 00:08:46.967 09:46:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.967 1+0 records in 00:08:46.967 1+0 records out 00:08:46.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113509 s, 3.6 MB/s 00:08:46.967 09:46:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.967 09:46:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:46.967 09:46:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.967 09:46:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.967 09:46:35 -- common/autotest_common.sh@887 -- # return 0 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:46.967 09:46:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:46.967 09:46:35 -- common/autotest_common.sh@867 -- # local i 00:08:46.967 09:46:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:46.967 09:46:35 -- common/autotest_common.sh@871 -- # break 00:08:46.967 09:46:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.967 09:46:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.967 1+0 records in 00:08:46.967 1+0 records out 00:08:46.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114386 s, 3.6 MB/s 00:08:46.967 09:46:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.967 09:46:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:46.967 09:46:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.967 09:46:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.967 09:46:35 -- common/autotest_common.sh@887 -- # return 0 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.967 09:46:35 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:47.229 09:46:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:47.229 09:46:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:47.229 09:46:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:47.229 09:46:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:47.229 09:46:36 -- common/autotest_common.sh@867 -- # local i 00:08:47.229 09:46:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.229 09:46:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.229 09:46:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:47.229 09:46:36 -- common/autotest_common.sh@871 -- # break 00:08:47.229 09:46:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.229 09:46:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.229 09:46:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.229 1+0 records in 00:08:47.229 1+0 records out 00:08:47.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718 s, 5.7 MB/s 00:08:47.229 09:46:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.229 09:46:36 -- common/autotest_common.sh@884 -- # size=4096 00:08:47.229 09:46:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.229 09:46:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.229 09:46:36 -- common/autotest_common.sh@887 -- # return 0 00:08:47.229 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:47.229 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:47.229 09:46:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:47.490 09:46:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:47.490 09:46:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:47.490 09:46:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:47.490 09:46:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:47.490 09:46:36 -- common/autotest_common.sh@867 -- # local i 00:08:47.490 09:46:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.490 09:46:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.490 09:46:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:47.490 09:46:36 -- common/autotest_common.sh@871 -- # break 00:08:47.490 09:46:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.490 09:46:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.490 09:46:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.490 1+0 records in 00:08:47.490 1+0 records out 00:08:47.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105872 s, 3.9 MB/s 00:08:47.490 09:46:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.490 09:46:36 -- common/autotest_common.sh@884 -- # size=4096 00:08:47.490 09:46:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.490 09:46:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.490 09:46:36 -- common/autotest_common.sh@887 -- # return 0 00:08:47.490 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:47.490 09:46:36 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:47.490 09:46:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:47.751 09:46:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:47.751 09:46:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:47.751 09:46:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:47.751 09:46:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:47.751 09:46:36 -- common/autotest_common.sh@867 -- # local i 00:08:47.751 09:46:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.751 09:46:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.751 09:46:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:47.751 09:46:36 -- common/autotest_common.sh@871 -- # break 00:08:47.751 09:46:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.751 09:46:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.751 09:46:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.751 1+0 records in 00:08:47.751 1+0 records out 00:08:47.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011157 s, 3.7 MB/s 00:08:47.751 09:46:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.751 09:46:36 -- common/autotest_common.sh@884 -- # size=4096 00:08:47.751 09:46:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.751 09:46:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.751 09:46:36 -- common/autotest_common.sh@887 -- # return 0 00:08:47.751 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:47.751 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:47.751 09:46:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:48.013 09:46:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:48.013 09:46:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:48.013 09:46:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:48.013 09:46:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:48.013 09:46:36 -- common/autotest_common.sh@867 -- # local i 00:08:48.013 09:46:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.013 09:46:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.013 09:46:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:48.013 09:46:36 -- common/autotest_common.sh@871 -- # break 00:08:48.013 09:46:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.013 09:46:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.013 09:46:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.013 1+0 records in 00:08:48.013 1+0 records out 00:08:48.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469477 s, 8.7 MB/s 00:08:48.013 09:46:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.013 09:46:36 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.013 09:46:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.013 09:46:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.013 09:46:36 -- common/autotest_common.sh@887 -- # return 0 
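The waitfornbd check exercised repeatedly above (common/autotest_common.sh@866-887) considers a freshly attached /dev/nbdX usable once it appears in /proc/partitions and a single direct-I/O block can be read from it. A condensed sketch of that helper — the real one also retries the dd itself up to 20 times, which is elided here:

    #!/usr/bin/env bash
    # Sketch of waitfornbd: poll /proc/partitions, then prove the device
    # answers I/O with one 4 KiB direct read.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]
    }

    waitfornbd_sketch nbd0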
00:08:48.013 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.013 09:46:36 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.013 09:46:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:48.274 09:46:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:48.274 09:46:37 -- common/autotest_common.sh@867 -- # local i 00:08:48.274 09:46:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:48.274 09:46:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:48.274 09:46:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:48.274 09:46:37 -- common/autotest_common.sh@871 -- # break 00:08:48.274 09:46:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:48.274 09:46:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:48.274 09:46:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.274 1+0 records in 00:08:48.274 1+0 records out 00:08:48.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118474 s, 3.5 MB/s 00:08:48.274 09:46:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.274 09:46:37 -- common/autotest_common.sh@884 -- # size=4096 00:08:48.274 09:46:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.274 09:46:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:48.274 09:46:37 -- common/autotest_common.sh@887 -- # return 0 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd0", 00:08:48.274 "bdev_name": "Nvme0n1p1" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd1", 00:08:48.274 "bdev_name": "Nvme0n1p2" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd2", 00:08:48.274 "bdev_name": "Nvme1n1" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd3", 00:08:48.274 "bdev_name": "Nvme2n1" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd4", 00:08:48.274 "bdev_name": "Nvme2n2" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd5", 00:08:48.274 "bdev_name": "Nvme2n3" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd6", 00:08:48.274 "bdev_name": "Nvme3n1" 00:08:48.274 } 00:08:48.274 ]' 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd0", 00:08:48.274 "bdev_name": "Nvme0n1p1" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd1", 00:08:48.274 "bdev_name": "Nvme0n1p2" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd2", 00:08:48.274 "bdev_name": "Nvme1n1" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd3", 00:08:48.274 "bdev_name": "Nvme2n1" 00:08:48.274 }, 00:08:48.274 { 
00:08:48.274 "nbd_device": "/dev/nbd4", 00:08:48.274 "bdev_name": "Nvme2n2" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd5", 00:08:48.274 "bdev_name": "Nvme2n3" 00:08:48.274 }, 00:08:48.274 { 00:08:48.274 "nbd_device": "/dev/nbd6", 00:08:48.274 "bdev_name": "Nvme3n1" 00:08:48.274 } 00:08:48.274 ]' 00:08:48.274 09:46:37 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@51 -- # local i 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@41 -- # break 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.536 09:46:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@41 -- # break 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.797 09:46:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@41 -- # break 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.059 09:46:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:08:49.317 09:46:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@41 -- # break 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.317 09:46:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@41 -- # break 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.318 09:46:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@41 -- # break 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.576 09:46:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@41 -- # break 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.835 09:46:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.093 09:46:38 -- 
bdev/nbd_common.sh@65 -- # true 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@122 -- # count=0 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@127 -- # return 0 00:08:50.093 09:46:38 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@12 -- # local i 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.093 09:46:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:50.093 /dev/nbd0 00:08:50.093 09:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:50.093 09:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:50.093 09:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:50.093 09:46:39 -- common/autotest_common.sh@867 -- # local i 00:08:50.093 09:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.093 09:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.093 09:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:50.093 09:46:39 -- common/autotest_common.sh@871 -- # break 00:08:50.093 09:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.093 09:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.093 09:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.093 1+0 records in 00:08:50.093 1+0 records out 00:08:50.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358798 s, 11.4 MB/s 00:08:50.093 09:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.093 09:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.093 09:46:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.093 09:46:39 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.093 09:46:39 -- common/autotest_common.sh@887 -- # return 0 00:08:50.093 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.093 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.093 09:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:50.351 /dev/nbd1 00:08:50.351 09:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:50.351 09:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:50.351 09:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:50.351 09:46:39 -- common/autotest_common.sh@867 -- # local i 00:08:50.351 09:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.351 09:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.351 09:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:50.351 09:46:39 -- common/autotest_common.sh@871 -- # break 00:08:50.351 09:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.351 09:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.351 09:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.351 1+0 records in 00:08:50.351 1+0 records out 00:08:50.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027376 s, 15.0 MB/s 00:08:50.351 09:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.351 09:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.351 09:46:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.351 09:46:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.351 09:46:39 -- common/autotest_common.sh@887 -- # return 0 00:08:50.351 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.351 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.351 09:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:50.609 /dev/nbd10 00:08:50.609 09:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:50.609 09:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:50.609 09:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:50.609 09:46:39 -- common/autotest_common.sh@867 -- # local i 00:08:50.609 09:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.609 09:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.609 09:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:50.609 09:46:39 -- common/autotest_common.sh@871 -- # break 00:08:50.609 09:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.609 09:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.609 09:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.609 1+0 records in 00:08:50.609 1+0 records out 00:08:50.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317631 s, 12.9 MB/s 00:08:50.609 09:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.609 09:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.609 09:46:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:08:50.609 09:46:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.609 09:46:39 -- common/autotest_common.sh@887 -- # return 0 00:08:50.609 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.609 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.609 09:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:50.867 /dev/nbd11 00:08:50.867 09:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:50.867 09:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:50.867 09:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:50.867 09:46:39 -- common/autotest_common.sh@867 -- # local i 00:08:50.867 09:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.867 09:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.867 09:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:50.867 09:46:39 -- common/autotest_common.sh@871 -- # break 00:08:50.867 09:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.867 09:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.867 09:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.867 1+0 records in 00:08:50.867 1+0 records out 00:08:50.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454022 s, 9.0 MB/s 00:08:50.867 09:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.867 09:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.867 09:46:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.867 09:46:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.867 09:46:39 -- common/autotest_common.sh@887 -- # return 0 00:08:50.867 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.867 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.867 09:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:51.127 /dev/nbd12 00:08:51.127 09:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:51.127 09:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:51.127 09:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:51.127 09:46:39 -- common/autotest_common.sh@867 -- # local i 00:08:51.127 09:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.127 09:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.127 09:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:51.127 09:46:39 -- common/autotest_common.sh@871 -- # break 00:08:51.127 09:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.127 09:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.127 09:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.127 1+0 records in 00:08:51.127 1+0 records out 00:08:51.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380261 s, 10.8 MB/s 00:08:51.127 09:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.127 09:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:51.127 09:46:39 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.127 09:46:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.127 09:46:39 -- common/autotest_common.sh@887 -- # return 0 00:08:51.127 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.127 09:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:51.127 09:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:51.127 /dev/nbd13 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:51.386 09:46:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:51.386 09:46:40 -- common/autotest_common.sh@867 -- # local i 00:08:51.386 09:46:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:51.386 09:46:40 -- common/autotest_common.sh@871 -- # break 00:08:51.386 09:46:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.386 1+0 records in 00:08:51.386 1+0 records out 00:08:51.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447397 s, 9.2 MB/s 00:08:51.386 09:46:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.386 09:46:40 -- common/autotest_common.sh@884 -- # size=4096 00:08:51.386 09:46:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.386 09:46:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.386 09:46:40 -- common/autotest_common.sh@887 -- # return 0 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:51.386 /dev/nbd14 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:51.386 09:46:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:51.386 09:46:40 -- common/autotest_common.sh@867 -- # local i 00:08:51.386 09:46:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:51.386 09:46:40 -- common/autotest_common.sh@871 -- # break 00:08:51.386 09:46:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.386 09:46:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.386 1+0 records in 00:08:51.386 1+0 records out 00:08:51.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435667 s, 9.4 MB/s 00:08:51.386 09:46:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.386 09:46:40 -- common/autotest_common.sh@884 -- # size=4096 00:08:51.386 09:46:40 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.386 09:46:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.386 09:46:40 -- common/autotest_common.sh@887 -- # return 0 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.386 09:46:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd0", 00:08:51.644 "bdev_name": "Nvme0n1p1" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd1", 00:08:51.644 "bdev_name": "Nvme0n1p2" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd10", 00:08:51.644 "bdev_name": "Nvme1n1" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd11", 00:08:51.644 "bdev_name": "Nvme2n1" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd12", 00:08:51.644 "bdev_name": "Nvme2n2" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd13", 00:08:51.644 "bdev_name": "Nvme2n3" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd14", 00:08:51.644 "bdev_name": "Nvme3n1" 00:08:51.644 } 00:08:51.644 ]' 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd0", 00:08:51.644 "bdev_name": "Nvme0n1p1" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd1", 00:08:51.644 "bdev_name": "Nvme0n1p2" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd10", 00:08:51.644 "bdev_name": "Nvme1n1" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd11", 00:08:51.644 "bdev_name": "Nvme2n1" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd12", 00:08:51.644 "bdev_name": "Nvme2n2" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd13", 00:08:51.644 "bdev_name": "Nvme2n3" 00:08:51.644 }, 00:08:51.644 { 00:08:51.644 "nbd_device": "/dev/nbd14", 00:08:51.644 "bdev_name": "Nvme3n1" 00:08:51.644 } 00:08:51.644 ]' 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.644 /dev/nbd1 00:08:51.644 /dev/nbd10 00:08:51.644 /dev/nbd11 00:08:51.644 /dev/nbd12 00:08:51.644 /dev/nbd13 00:08:51.644 /dev/nbd14' 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.644 /dev/nbd1 00:08:51.644 /dev/nbd10 00:08:51.644 /dev/nbd11 00:08:51.644 /dev/nbd12 00:08:51.644 /dev/nbd13 00:08:51.644 /dev/nbd14' 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@65 -- # count=7 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@95 -- # count=7 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:51.644 09:46:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:51.645 256+0 records in 00:08:51.645 256+0 records out 00:08:51.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00802536 s, 131 MB/s 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.645 09:46:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.906 256+0 records in 00:08:51.906 256+0 records out 00:08:51.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0614839 s, 17.1 MB/s 00:08:51.906 09:46:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.906 09:46:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.906 256+0 records in 00:08:51.906 256+0 records out 00:08:51.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0590523 s, 17.8 MB/s 00:08:51.906 09:46:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.906 09:46:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:51.906 256+0 records in 00:08:51.906 256+0 records out 00:08:51.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0965981 s, 10.9 MB/s 00:08:51.906 09:46:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.906 09:46:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:52.166 256+0 records in 00:08:52.166 256+0 records out 00:08:52.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251469 s, 4.2 MB/s 00:08:52.166 09:46:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.166 09:46:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:52.426 256+0 records in 00:08:52.426 256+0 records out 00:08:52.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148595 s, 7.1 MB/s 00:08:52.426 09:46:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.426 09:46:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:52.687 256+0 records in 00:08:52.687 256+0 records out 00:08:52.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.226593 s, 4.6 MB/s 00:08:52.687 09:46:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.687 09:46:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:52.949 256+0 records in 00:08:52.949 256+0 records out 00:08:52.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239712 s, 4.4 MB/s 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@51 -- # local i 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.949 09:46:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@41 -- # break 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.211 09:46:42 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@41 -- # break 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.211 09:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:53.476 09:46:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:53.476 09:46:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:53.476 09:46:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:53.476 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.476 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.477 09:46:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:53.477 09:46:42 -- bdev/nbd_common.sh@41 -- # break 00:08:53.477 09:46:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.477 09:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.477 09:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@41 -- # break 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.738 09:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@41 -- # break 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.999 09:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:53.999 09:46:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:53.999 09:46:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:53.999 09:46:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:53.999 09:46:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.999 09:46:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.999 09:46:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@41 -- # break 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@41 -- # break 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.261 09:46:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@65 -- # true 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@65 -- # count=0 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@104 -- # count=0 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@109 -- # return 0 00:08:54.523 09:46:43 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:54.523 09:46:43 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:54.785 malloc_lvol_verify 00:08:54.785 09:46:43 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:55.045 a37a9e2b-d566-4285-b336-ec2ab7440d7b 00:08:55.045 09:46:43 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:55.045 04de5a03-43ae-473a-a93f-7480c47c4f0f 00:08:55.045 09:46:44 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:55.305 /dev/nbd0 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:55.305 mke2fs 1.47.0 (5-Feb-2023) 00:08:55.305 Discarding device blocks: 0/4096 done 00:08:55.305 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:55.305 00:08:55.305 Allocating group tables: 0/1 done 00:08:55.305 Writing inode tables: 0/1 done 00:08:55.305 Creating journal (1024 
blocks): done 00:08:55.305 Writing superblocks and filesystem accounting information: 0/1 done 00:08:55.305 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@51 -- # local i 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.305 09:46:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@41 -- # break 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:55.565 09:46:44 -- bdev/nbd_common.sh@147 -- # return 0 00:08:55.565 09:46:44 -- bdev/blockdev.sh@324 -- # killprocess 62081 00:08:55.565 09:46:44 -- common/autotest_common.sh@936 -- # '[' -z 62081 ']' 00:08:55.565 09:46:44 -- common/autotest_common.sh@940 -- # kill -0 62081 00:08:55.565 09:46:44 -- common/autotest_common.sh@941 -- # uname 00:08:55.565 09:46:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:55.565 09:46:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62081 00:08:55.565 killing process with pid 62081 00:08:55.565 09:46:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:55.565 09:46:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:55.565 09:46:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62081' 00:08:55.565 09:46:44 -- common/autotest_common.sh@955 -- # kill 62081 00:08:55.565 09:46:44 -- common/autotest_common.sh@960 -- # wait 62081 00:08:56.506 ************************************ 00:08:56.506 END TEST bdev_nbd 00:08:56.506 ************************************ 00:08:56.506 09:46:45 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:56.506 00:08:56.506 real 0m10.694s 00:08:56.506 user 0m14.809s 00:08:56.506 sys 0m3.434s 00:08:56.506 09:46:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.506 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.506 09:46:45 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:56.506 09:46:45 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:56.506 09:46:45 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:56.506 skipping fio tests on NVMe due to multi-ns failures. 00:08:56.506 09:46:45 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
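bdev_nbd closes with the lvol smoke test traced just above: build a malloc bdev, carve an lvstore and a small lvol out of it, export the lvol over NBD, and require mkfs.ext4 to exit cleanly. A condensed replay of those RPC calls (the rpc shell variable is shorthand introduced here; command names and sizes match the trace):

# Condensed replay of the lvol-over-NBD check; $rpc is local shorthand.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside lvs
$rpc nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0; mkfs_ret=$?
$rpc nbd_stop_disk /dev/nbd0
[ "$mkfs_ret" -eq 0 ]                                  # pass only on a clean mkfs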
00:08:56.506 09:46:45 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:56.506 09:46:45 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:56.506 09:46:45 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:56.506 09:46:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.506 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:08:56.506 ************************************ 00:08:56.506 START TEST bdev_verify 00:08:56.506 ************************************ 00:08:56.506 09:46:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:56.506 [2024-12-15 09:46:45.461799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:56.506 [2024-12-15 09:46:45.461910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62493 ] 00:08:56.768 [2024-12-15 09:46:45.611085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.029 [2024-12-15 09:46:45.817939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.029 [2024-12-15 09:46:45.818056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.601 Running I/O for 5 seconds... 00:09:02.895 00:09:02.895 Latency(us) 00:09:02.895 [2024-12-15T09:46:51.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.895 [2024-12-15T09:46:51.911Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.895 Verification LBA range: start 0x0 length 0x5e800 00:09:02.895 Nvme0n1p1 : 5.06 1992.15 7.78 0.00 0.00 63991.96 9124.63 89935.56 00:09:02.895 [2024-12-15T09:46:51.911Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x5e800 length 0x5e800 00:09:02.896 Nvme0n1p1 : 5.07 2082.57 8.14 0.00 0.00 60991.87 7108.14 60494.77 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x0 length 0x5e7ff 00:09:02.896 Nvme0n1p2 : 5.07 1997.90 7.80 0.00 0.00 63767.40 4738.76 68560.74 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:02.896 Nvme0n1p2 : 5.08 2080.68 8.13 0.00 0.00 60974.65 10889.06 78239.90 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x0 length 0xa0000 00:09:02.896 Nvme1n1 : 5.08 1995.97 7.80 0.00 0.00 63730.89 9074.22 66544.25 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0xa0000 length 0xa0000 00:09:02.896 Nvme1n1 : 5.06 2080.18 8.13 0.00 0.00 61322.93 10334.52 61704.66 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x0 length 0x80000 00:09:02.896 Nvme2n1 : 5.08 1994.44 
7.79 0.00 0.00 63687.21 11998.13 63721.16 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x80000 length 0x80000 00:09:02.896 Nvme2n1 : 5.06 2079.54 8.12 0.00 0.00 61305.57 10183.29 61704.66 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x0 length 0x80000 00:09:02.896 Nvme2n2 : 5.08 1993.94 7.79 0.00 0.00 63642.92 12603.08 67350.84 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x80000 length 0x80000 00:09:02.896 Nvme2n2 : 5.07 2084.71 8.14 0.00 0.00 61097.03 6553.60 59688.17 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x0 length 0x80000 00:09:02.896 Nvme2n3 : 5.08 1993.43 7.79 0.00 0.00 63595.68 12905.55 66947.54 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x80000 length 0x80000 00:09:02.896 Nvme2n3 : 5.07 2083.91 8.14 0.00 0.00 61069.84 7208.96 58074.98 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x0 length 0x20000 00:09:02.896 Nvme3n1 : 5.09 1992.90 7.78 0.00 0.00 63545.16 13510.50 67754.14 00:09:02.896 [2024-12-15T09:46:51.912Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:02.896 Verification LBA range: start 0x20000 length 0x20000 00:09:02.896 Nvme3n1 : 5.07 2083.26 8.14 0.00 0.00 61026.32 7007.31 58478.28 00:09:02.896 [2024-12-15T09:46:51.912Z] =================================================================================================================== 00:09:02.896 [2024-12-15T09:46:51.912Z] Total : 28535.59 111.47 0.00 0.00 62383.93 4738.76 89935.56 00:09:05.445 00:09:05.445 real 0m8.660s 00:09:05.445 user 0m16.014s 00:09:05.445 sys 0m0.312s 00:09:05.445 ************************************ 00:09:05.445 END TEST bdev_verify 00:09:05.445 ************************************ 00:09:05.445 09:46:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.445 09:46:54 -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 09:46:54 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:05.445 09:46:54 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:05.445 09:46:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.445 09:46:54 -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 ************************************ 00:09:05.445 START TEST bdev_verify_big_io 00:09:05.445 ************************************ 00:09:05.445 09:46:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:05.445 [2024-12-15 09:46:54.220576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
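The verify pass that just finished and the big-I/O pass starting here drive the same bdevperf binary over both reactor cores; only the I/O size changes (-o 4096 above, -o 65536 below). The invocation, with every flag copied from the trace (-q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds, -m core mask):

# bdev_verify invocation as traced; bdev_verify_big_io swaps -o 4096 for -o 65536.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3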
00:09:05.445 [2024-12-15 09:46:54.220712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62596 ] 00:09:05.445 [2024-12-15 09:46:54.373820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.706 [2024-12-15 09:46:54.597055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.706 [2024-12-15 09:46:54.597142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.649 Running I/O for 5 seconds... 00:09:13.222 00:09:13.222 Latency(us) 00:09:13.222 [2024-12-15T09:47:02.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.222 [2024-12-15T09:47:02.238Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.222 Verification LBA range: start 0x0 length 0x5e80 00:09:13.222 Nvme0n1p1 : 5.43 169.09 10.57 0.00 0.00 730151.02 116956.55 1116330.14 00:09:13.222 [2024-12-15T09:47:02.238Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.222 Verification LBA range: start 0x5e80 length 0x5e80 00:09:13.222 Nvme0n1p1 : 5.43 230.82 14.43 0.00 0.00 540160.19 83482.78 787238.60 00:09:13.222 [2024-12-15T09:47:02.238Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.222 Verification LBA range: start 0x0 length 0x5e7f 00:09:13.222 Nvme0n1p2 : 5.48 175.42 10.96 0.00 0.00 694906.20 46984.27 1032444.06 00:09:13.222 [2024-12-15T09:47:02.238Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.222 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:13.222 Nvme0n1p2 : 5.43 230.76 14.42 0.00 0.00 532772.72 83482.78 732390.01 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x0 length 0xa000 00:09:13.223 Nvme1n1 : 5.50 183.92 11.50 0.00 0.00 654223.63 16333.59 942105.21 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0xa000 length 0xa000 00:09:13.223 Nvme1n1 : 5.43 230.71 14.42 0.00 0.00 525465.07 83482.78 674315.03 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x0 length 0x8000 00:09:13.223 Nvme2n1 : 5.52 190.80 11.93 0.00 0.00 617371.91 14417.92 851766.35 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x8000 length 0x8000 00:09:13.223 Nvme2n1 : 5.45 238.15 14.88 0.00 0.00 507308.81 13107.20 619466.44 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x0 length 0x8000 00:09:13.223 Nvme2n2 : 5.54 196.75 12.30 0.00 0.00 585114.41 14417.92 758201.11 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x8000 length 0x8000 00:09:13.223 Nvme2n2 : 5.47 246.67 15.42 0.00 0.00 485316.97 20769.87 583976.17 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x0 
length 0x8000 00:09:13.223 Nvme2n3 : 5.61 247.84 15.49 0.00 0.00 456591.54 6402.36 1051802.39 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x8000 length 0x8000 00:09:13.223 Nvme2n3 : 5.47 246.60 15.41 0.00 0.00 478373.97 21374.82 506542.87 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x0 length 0x2000 00:09:13.223 Nvme3n1 : 5.70 371.82 23.24 0.00 0.00 299039.12 288.30 1361535.61 00:09:13.223 [2024-12-15T09:47:02.239Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:13.223 Verification LBA range: start 0x2000 length 0x2000 00:09:13.223 Nvme3n1 : 5.48 261.10 16.32 0.00 0.00 446613.47 2545.82 548485.91 00:09:13.223 [2024-12-15T09:47:02.239Z] =================================================================================================================== 00:09:13.223 [2024-12-15T09:47:02.239Z] Total : 3220.44 201.28 0.00 0.00 516479.30 288.30 1361535.61 00:09:13.804 00:09:13.804 real 0m8.411s 00:09:13.804 user 0m15.601s 00:09:13.804 sys 0m0.343s 00:09:13.804 ************************************ 00:09:13.804 END TEST bdev_verify_big_io 00:09:13.804 ************************************ 00:09:13.804 09:47:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.804 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:09:13.804 09:47:02 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.804 09:47:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:13.804 09:47:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.804 09:47:02 -- common/autotest_common.sh@10 -- # set +x 00:09:13.804 ************************************ 00:09:13.804 START TEST bdev_write_zeroes 00:09:13.804 ************************************ 00:09:13.804 09:47:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.804 [2024-12-15 09:47:02.668502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:13.804 [2024-12-15 09:47:02.668612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62711 ] 00:09:13.804 [2024-12-15 09:47:02.809278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.094 [2024-12-15 09:47:02.984711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.662 Running I/O for 1 seconds... 
00:09:15.597 00:09:15.597 Latency(us) 00:09:15.597 [2024-12-15T09:47:04.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme0n1p1 : 1.01 9345.53 36.51 0.00 0.00 13666.11 6351.95 24702.03 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme0n1p2 : 1.01 9334.13 36.46 0.00 0.00 13659.20 5873.03 23895.43 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme1n1 : 1.02 9323.48 36.42 0.00 0.00 13650.64 8418.86 21576.47 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme2n1 : 1.02 9312.23 36.38 0.00 0.00 13647.81 8822.15 20769.87 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme2n2 : 1.02 9301.09 36.33 0.00 0.00 13645.11 9225.45 20064.10 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme2n3 : 1.02 9289.83 36.29 0.00 0.00 13619.11 9225.45 19963.27 00:09:15.597 [2024-12-15T09:47:04.613Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.597 Nvme3n1 : 1.02 9278.72 36.25 0.00 0.00 13609.74 8721.33 20164.92 00:09:15.597 [2024-12-15T09:47:04.613Z] =================================================================================================================== 00:09:15.597 [2024-12-15T09:47:04.613Z] Total : 65185.00 254.63 0.00 0.00 13642.53 5873.03 24702.03 00:09:16.540 00:09:16.540 real 0m2.820s 00:09:16.540 user 0m2.527s 00:09:16.540 sys 0m0.178s 00:09:16.540 ************************************ 00:09:16.540 END TEST bdev_write_zeroes 00:09:16.540 ************************************ 00:09:16.540 09:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:16.540 09:47:05 -- common/autotest_common.sh@10 -- # set +x 00:09:16.540 09:47:05 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:16.540 09:47:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:16.540 09:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.540 09:47:05 -- common/autotest_common.sh@10 -- # set +x 00:09:16.540 ************************************ 00:09:16.540 START TEST bdev_json_nonenclosed 00:09:16.540 ************************************ 00:09:16.540 09:47:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:16.540 [2024-12-15 09:47:05.547150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:16.540 [2024-12-15 09:47:05.547280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62764 ] 00:09:16.801 [2024-12-15 09:47:05.696442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.061 [2024-12-15 09:47:05.873361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.061 [2024-12-15 09:47:05.873507] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:17.061 [2024-12-15 09:47:05.873524] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:17.322 00:09:17.322 real 0m0.665s 00:09:17.322 user 0m0.468s 00:09:17.322 sys 0m0.094s 00:09:17.322 ************************************ 00:09:17.322 END TEST bdev_json_nonenclosed 00:09:17.322 ************************************ 00:09:17.322 09:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.322 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:09:17.322 09:47:06 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:17.322 09:47:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:17.322 09:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.322 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:09:17.322 ************************************ 00:09:17.322 START TEST bdev_json_nonarray 00:09:17.322 ************************************ 00:09:17.322 09:47:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:17.322 [2024-12-15 09:47:06.273044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:17.322 [2024-12-15 09:47:06.273170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62784 ] 00:09:17.583 [2024-12-15 09:47:06.417066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.583 [2024-12-15 09:47:06.596011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.583 [2024-12-15 09:47:06.596168] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
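Note: both JSON negative tests feed bdevperf a deliberately malformed config and expect startup to fail, which is why the spdk_app_stop warning follows. The fixture files themselves are not reproduced in this log; the two errors above imply a valid --json config must be a single object whose "subsystems" key holds an array, so a hypothetical stand-in for nonarray.json might look like:

    # Hypothetical fixture contents, for illustration only: "subsystems"
    # is an object here, not an array, so json_config.c should reject it.
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "bdev": {} } }
    EOF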
00:09:17.583 [2024-12-15 09:47:06.596192] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.155 00:09:18.155 real 0m0.663s 00:09:18.155 user 0m0.465s 00:09:18.155 sys 0m0.094s 00:09:18.155 09:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.155 ************************************ 00:09:18.155 END TEST bdev_json_nonarray 00:09:18.155 ************************************ 00:09:18.155 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:09:18.155 09:47:06 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:18.155 09:47:06 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:18.155 09:47:06 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:18.155 09:47:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:18.155 09:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.155 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:09:18.155 ************************************ 00:09:18.155 START TEST bdev_gpt_uuid 00:09:18.155 ************************************ 00:09:18.155 09:47:06 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:18.155 09:47:06 -- bdev/blockdev.sh@612 -- # local bdev 00:09:18.155 09:47:06 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:18.155 09:47:06 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62815 00:09:18.155 09:47:06 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:18.155 09:47:06 -- bdev/blockdev.sh@47 -- # waitforlisten 62815 00:09:18.155 09:47:06 -- common/autotest_common.sh@829 -- # '[' -z 62815 ']' 00:09:18.155 09:47:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.155 09:47:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.155 09:47:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.155 09:47:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.155 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:09:18.155 09:47:06 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:18.155 [2024-12-15 09:47:07.013671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:18.155 [2024-12-15 09:47:07.013782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62815 ] 00:09:18.155 [2024-12-15 09:47:07.161924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.415 [2024-12-15 09:47:07.335383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:18.415 [2024-12-15 09:47:07.335594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.801 09:47:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.801 09:47:08 -- common/autotest_common.sh@862 -- # return 0 00:09:19.801 09:47:08 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:19.801 09:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.801 09:47:08 -- common/autotest_common.sh@10 -- # set +x 00:09:19.801 Some configs were skipped because the RPC state that can call them passed over. 
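Note: rpc_cmd in this trace is a thin wrapper that forwards its arguments to scripts/rpc.py on the default /var/tmp/spdk.sock socket, so the gpt_uuid flow can be replayed by hand against a running spdk_tgt. A sketch using the same config and the partition GUID checked below:

    # Load the bdev config, wait for examine to finish, then fetch the
    # GPT partition bdev by its partition GUID and print its alias.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$RPC" bdev_wait_for_examine
    "$RPC" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'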
00:09:19.801 09:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:20.062 09:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.062 09:47:08 -- common/autotest_common.sh@10 -- # set +x 00:09:20.062 09:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:20.062 09:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.062 09:47:08 -- common/autotest_common.sh@10 -- # set +x 00:09:20.062 09:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:20.062 { 00:09:20.062 "name": "Nvme0n1p1", 00:09:20.062 "aliases": [ 00:09:20.062 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:20.062 ], 00:09:20.062 "product_name": "GPT Disk", 00:09:20.062 "block_size": 4096, 00:09:20.062 "num_blocks": 774144, 00:09:20.062 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:20.062 "md_size": 64, 00:09:20.062 "md_interleave": false, 00:09:20.062 "dif_type": 0, 00:09:20.062 "assigned_rate_limits": { 00:09:20.062 "rw_ios_per_sec": 0, 00:09:20.062 "rw_mbytes_per_sec": 0, 00:09:20.062 "r_mbytes_per_sec": 0, 00:09:20.062 "w_mbytes_per_sec": 0 00:09:20.062 }, 00:09:20.062 "claimed": false, 00:09:20.062 "zoned": false, 00:09:20.062 "supported_io_types": { 00:09:20.062 "read": true, 00:09:20.062 "write": true, 00:09:20.062 "unmap": true, 00:09:20.062 "write_zeroes": true, 00:09:20.062 "flush": true, 00:09:20.062 "reset": true, 00:09:20.062 "compare": true, 00:09:20.062 "compare_and_write": false, 00:09:20.062 "abort": true, 00:09:20.062 "nvme_admin": false, 00:09:20.062 "nvme_io": false 00:09:20.062 }, 00:09:20.062 "driver_specific": { 00:09:20.062 "gpt": { 00:09:20.062 "base_bdev": "Nvme0n1", 00:09:20.062 "offset_blocks": 256, 00:09:20.062 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:20.062 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:20.062 "partition_name": "SPDK_TEST_first" 00:09:20.062 } 00:09:20.062 } 00:09:20.062 } 00:09:20.062 ]' 00:09:20.062 09:47:08 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:20.062 09:47:08 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:20.062 09:47:08 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:20.062 09:47:08 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:20.062 09:47:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.062 09:47:08 -- common/autotest_common.sh@10 -- # set +x 00:09:20.062 09:47:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.062 09:47:08 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:20.062 { 00:09:20.062 "name": "Nvme0n1p2", 00:09:20.062 "aliases": [ 00:09:20.062 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:20.062 ], 00:09:20.062 "product_name": "GPT Disk", 00:09:20.062 "block_size": 4096, 00:09:20.062 "num_blocks": 774143, 00:09:20.062 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:20.062 "md_size": 64, 00:09:20.062 "md_interleave": false, 00:09:20.062 "dif_type": 0, 00:09:20.062 "assigned_rate_limits": { 00:09:20.062 "rw_ios_per_sec": 0, 00:09:20.062 "rw_mbytes_per_sec": 0, 00:09:20.062 "r_mbytes_per_sec": 0, 00:09:20.062 "w_mbytes_per_sec": 0 00:09:20.062 }, 00:09:20.062 "claimed": false, 00:09:20.062 "zoned": false, 00:09:20.062 "supported_io_types": { 00:09:20.062 "read": true, 00:09:20.062 "write": true, 00:09:20.062 "unmap": true, 00:09:20.062 "write_zeroes": true, 00:09:20.062 "flush": true, 00:09:20.062 "reset": true, 00:09:20.062 "compare": true, 00:09:20.062 "compare_and_write": false, 00:09:20.062 "abort": true, 00:09:20.062 "nvme_admin": false, 00:09:20.062 "nvme_io": false 00:09:20.062 }, 00:09:20.062 "driver_specific": { 00:09:20.062 "gpt": { 00:09:20.062 "base_bdev": "Nvme0n1", 00:09:20.062 "offset_blocks": 774400, 00:09:20.062 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:20.062 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:20.062 "partition_name": "SPDK_TEST_second" 00:09:20.062 } 00:09:20.062 } 00:09:20.062 } 00:09:20.062 ]' 00:09:20.062 09:47:08 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:20.062 09:47:08 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:20.063 09:47:08 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:20.063 09:47:09 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:20.063 09:47:09 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:20.063 09:47:09 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:20.063 09:47:09 -- bdev/blockdev.sh@629 -- # killprocess 62815 00:09:20.063 09:47:09 -- common/autotest_common.sh@936 -- # '[' -z 62815 ']' 00:09:20.063 09:47:09 -- common/autotest_common.sh@940 -- # kill -0 62815 00:09:20.063 09:47:09 -- common/autotest_common.sh@941 -- # uname 00:09:20.063 09:47:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:20.063 09:47:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62815 00:09:20.063 09:47:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:20.063 killing process with pid 62815 00:09:20.063 09:47:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:20.063 09:47:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62815' 00:09:20.063 09:47:09 -- common/autotest_common.sh@955 -- # kill 62815 00:09:20.063 09:47:09 -- common/autotest_common.sh@960 -- # wait 62815 00:09:21.975 00:09:21.975 real 0m3.611s 00:09:21.975 user 0m3.874s 00:09:21.975 sys 0m0.384s 00:09:21.975 09:47:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.975 ************************************ 00:09:21.975 END TEST bdev_gpt_uuid 00:09:21.975 ************************************ 00:09:21.975 09:47:10 -- common/autotest_common.sh@10 -- # set +x 00:09:21.975 09:47:10 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:21.975 09:47:10 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:21.975 09:47:10 -- bdev/blockdev.sh@809 -- # cleanup 00:09:21.975 09:47:10 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:21.975 09:47:10 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:21.975 09:47:10 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:21.975 09:47:10 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:21.975 09:47:10 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:21.975 09:47:10 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.237 Waiting for block devices as requested 00:09:22.237 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.237 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.237 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.498 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.780 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:27.780 09:47:16 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:27.780 09:47:16 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:27.780 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:27.780 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:27.780 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:27.780 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:27.780 09:47:16 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:27.780 00:09:27.780 real 0m58.510s 00:09:27.780 user 1m14.091s 00:09:27.780 sys 0m8.105s 00:09:27.780 09:47:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:27.780 09:47:16 -- common/autotest_common.sh@10 -- # set +x 00:09:27.780 ************************************ 00:09:27.780 END TEST blockdev_nvme_gpt 00:09:27.780 ************************************ 00:09:27.780 09:47:16 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:27.780 09:47:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:27.780 09:47:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.780 09:47:16 -- common/autotest_common.sh@10 -- # set +x 00:09:27.780 ************************************ 00:09:27.780 START TEST nvme 00:09:27.780 ************************************ 00:09:27.780 09:47:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:27.780 * Looking for test storage... 
00:09:27.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.780 09:47:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:27.780 09:47:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:27.780 09:47:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:28.038 09:47:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:28.038 09:47:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:28.038 09:47:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:28.038 09:47:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:28.038 09:47:16 -- scripts/common.sh@335 -- # IFS=.-: 00:09:28.038 09:47:16 -- scripts/common.sh@335 -- # read -ra ver1 00:09:28.038 09:47:16 -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.038 09:47:16 -- scripts/common.sh@336 -- # read -ra ver2 00:09:28.038 09:47:16 -- scripts/common.sh@337 -- # local 'op=<' 00:09:28.038 09:47:16 -- scripts/common.sh@339 -- # ver1_l=2 00:09:28.038 09:47:16 -- scripts/common.sh@340 -- # ver2_l=1 00:09:28.038 09:47:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:28.038 09:47:16 -- scripts/common.sh@343 -- # case "$op" in 00:09:28.038 09:47:16 -- scripts/common.sh@344 -- # : 1 00:09:28.038 09:47:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:28.038 09:47:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.038 09:47:16 -- scripts/common.sh@364 -- # decimal 1 00:09:28.038 09:47:16 -- scripts/common.sh@352 -- # local d=1 00:09:28.038 09:47:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.038 09:47:16 -- scripts/common.sh@354 -- # echo 1 00:09:28.038 09:47:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:28.038 09:47:16 -- scripts/common.sh@365 -- # decimal 2 00:09:28.038 09:47:16 -- scripts/common.sh@352 -- # local d=2 00:09:28.038 09:47:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.038 09:47:16 -- scripts/common.sh@354 -- # echo 2 00:09:28.038 09:47:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:28.038 09:47:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:28.038 09:47:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:28.038 09:47:16 -- scripts/common.sh@367 -- # return 0 00:09:28.038 09:47:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.038 09:47:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:28.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.038 --rc genhtml_branch_coverage=1 00:09:28.038 --rc genhtml_function_coverage=1 00:09:28.038 --rc genhtml_legend=1 00:09:28.038 --rc geninfo_all_blocks=1 00:09:28.038 --rc geninfo_unexecuted_blocks=1 00:09:28.038 00:09:28.038 ' 00:09:28.038 09:47:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:28.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.038 --rc genhtml_branch_coverage=1 00:09:28.038 --rc genhtml_function_coverage=1 00:09:28.038 --rc genhtml_legend=1 00:09:28.038 --rc geninfo_all_blocks=1 00:09:28.038 --rc geninfo_unexecuted_blocks=1 00:09:28.038 00:09:28.038 ' 00:09:28.038 09:47:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:28.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.038 --rc genhtml_branch_coverage=1 00:09:28.038 --rc genhtml_function_coverage=1 00:09:28.038 --rc genhtml_legend=1 00:09:28.038 --rc geninfo_all_blocks=1 00:09:28.039 --rc geninfo_unexecuted_blocks=1 00:09:28.039 00:09:28.039 ' 00:09:28.039 09:47:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:28.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.039 --rc genhtml_branch_coverage=1 00:09:28.039 --rc genhtml_function_coverage=1 00:09:28.039 --rc genhtml_legend=1 00:09:28.039 --rc geninfo_all_blocks=1 00:09:28.039 --rc geninfo_unexecuted_blocks=1 00:09:28.039 00:09:28.039 ' 00:09:28.039 09:47:16 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.864 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 09:47:17 -- nvme/nvme.sh@79 -- # uname 00:09:28.864 09:47:17 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:28.864 09:47:17 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:28.864 09:47:17 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:28.864 09:47:17 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:28.864 09:47:17 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:28.864 09:47:17 -- common/autotest_common.sh@1055 -- # echo 0 00:09:28.864 09:47:17 -- common/autotest_common.sh@1057 -- # stubpid=63485 00:09:28.864 Waiting for stub to ready for secondary processes... 00:09:28.864 09:47:17 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:28.864 09:47:17 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:28.864 09:47:17 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63485 ]] 00:09:28.864 09:47:17 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:28.864 09:47:17 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:28.864 [2024-12-15 09:47:17.849914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:28.864 [2024-12-15 09:47:17.850019] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.806 [2024-12-15 09:47:18.600142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.806 [2024-12-15 09:47:18.772860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.806 [2024-12-15 09:47:18.773496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.806 [2024-12-15 09:47:18.773506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.806 [2024-12-15 09:47:18.793298] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:29.806 [2024-12-15 09:47:18.803385] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:29.806 [2024-12-15 09:47:18.803684] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:29.806 [2024-12-15 09:47:18.815428] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:29.806 [2024-12-15 09:47:18.815604] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:29.806 [2024-12-15 09:47:18.815713] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:29.806 09:47:18 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:29.806 09:47:18 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63485 ]] 00:09:29.806 09:47:18 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:30.067 [2024-12-15 09:47:18.823454] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:30.067 [2024-12-15 09:47:18.823600] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:30.067 [2024-12-15 09:47:18.823694] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:30.067 [2024-12-15 09:47:18.831071] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:30.067 [2024-12-15 09:47:18.831190] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:30.067 [2024-12-15 09:47:18.831291] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:30.067 [2024-12-15 09:47:18.831367] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:30.067 [2024-12-15 09:47:18.831478] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:31.007 done. 00:09:31.007 09:47:19 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:31.007 09:47:19 -- common/autotest_common.sh@1064 -- # echo done. 
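Note: the "Waiting for stub to ready" loop above is autotest_common.sh polling for the sentinel file the stub app creates once it has claimed the controllers. A simplified sketch of that pattern with the flags from this run (-s 4096: hugepage memory in MB, -i 0: shared-memory id, -m 0xE: core mask for cores 1-3, matching the three reactors above):

    # Start the stub, then wait until /var/run/spdk_stub0 exists or the
    # stub process dies; this mirrors the traced wait loop.
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1s
    done
    echo done.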
00:09:31.007 09:47:19 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:31.007 09:47:19 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:31.007 09:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.007 09:47:19 -- common/autotest_common.sh@10 -- # set +x 00:09:31.007 ************************************ 00:09:31.007 START TEST nvme_reset 00:09:31.007 ************************************ 00:09:31.007 09:47:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:31.268 Initializing NVMe Controllers 00:09:31.268 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:31.268 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:31.268 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:31.268 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:31.268 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:31.268 00:09:31.268 real 0m0.195s 00:09:31.268 user 0m0.056s 00:09:31.268 sys 0m0.097s 00:09:31.268 09:47:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.268 ************************************ 00:09:31.268 END TEST nvme_reset 00:09:31.268 ************************************ 00:09:31.268 09:47:20 -- common/autotest_common.sh@10 -- # set +x 00:09:31.268 09:47:20 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:31.268 09:47:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:31.268 09:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.268 09:47:20 -- common/autotest_common.sh@10 -- # set +x 00:09:31.268 ************************************ 00:09:31.268 START TEST nvme_identify 00:09:31.268 ************************************ 00:09:31.268 09:47:20 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:31.268 09:47:20 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:31.268 09:47:20 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:31.268 09:47:20 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.268 09:47:20 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:31.268 09:47:20 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:31.268 09:47:20 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:31.268 09:47:20 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.268 09:47:20 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.268 09:47:20 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:31.268 09:47:20 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:31.268 09:47:20 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:31.268 09:47:20 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:31.532 ===================================================== 00:09:31.532 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:31.532 ===================================================== 00:09:31.532 Controller Capabilities/Features 00:09:31.532 ================================ 00:09:31.532 Vendor ID: 1b36 00:09:31.532 Subsystem Vendor ID: 1af4 00:09:31.532 Serial Number: 12343 00:09:31.532 Model Number: QEMU NVMe Ctrl 00:09:31.532 Firmware Version: 8.0.0 00:09:31.532 Recommended Arb Burst: 6 00:09:31.532 IEEE OUI Identifier: 00 54 52 00:09:31.532 Multi-path I/O 00:09:31.532 May have multiple subsystem ports: No 00:09:31.532 May have 
multiple controllers: Yes 00:09:31.532 Associated with SR-IOV VF: No 00:09:31.532 Max Data Transfer Size: 524288 00:09:31.532 Max Number of Namespaces: 256 00:09:31.532 Max Number of I/O Queues: 64 00:09:31.532 NVMe Specification Version (VS): 1.4 00:09:31.532 NVMe Specification Version (Identify): 1.4 00:09:31.532 Maximum Queue Entries: 2048 00:09:31.532 Contiguous Queues Required: Yes 00:09:31.532 Arbitration Mechanisms Supported 00:09:31.532 Weighted Round Robin: Not Supported 00:09:31.532 Vendor Specific: Not Supported 00:09:31.532 Reset Timeout: 7500 ms 00:09:31.532 Doorbell Stride: 4 bytes 00:09:31.532 NVM Subsystem Reset: Not Supported 00:09:31.532 Command Sets Supported 00:09:31.532 NVM Command Set: Supported 00:09:31.532 Boot Partition: Not Supported 00:09:31.532 Memory Page Size Minimum: 4096 bytes 00:09:31.532 Memory Page Size Maximum: 65536 bytes 00:09:31.532 Persistent Memory Region: Not Supported 00:09:31.532 Optional Asynchronous Events Supported 00:09:31.532 Namespace Attribute Notices: Supported 00:09:31.532 Firmware Activation Notices: Not Supported 00:09:31.532 ANA Change Notices: Not Supported 00:09:31.532 PLE Aggregate Log Change Notices: Not Supported 00:09:31.532 LBA Status Info Alert Notices: Not Supported 00:09:31.532 EGE Aggregate Log Change Notices: Not Supported 00:09:31.532 Normal NVM Subsystem Shutdown event: Not Supported 00:09:31.532 Zone Descriptor Change Notices: Not Supported 00:09:31.532 Discovery Log Change Notices: Not Supported 00:09:31.532 Controller Attributes 00:09:31.532 128-bit Host Identifier: Not Supported 00:09:31.532 Non-Operational Permissive Mode: Not Supported 00:09:31.532 NVM Sets: Not Supported 00:09:31.532 Read Recovery Levels: Not Supported 00:09:31.532 Endurance Groups: Supported 00:09:31.532 Predictable Latency Mode: Not Supported 00:09:31.532 Traffic Based Keep ALive: Not Supported 00:09:31.532 Namespace Granularity: Not Supported 00:09:31.532 SQ Associations: Not Supported 00:09:31.532 UUID List: Not Supported 00:09:31.532 Multi-Domain Subsystem: Not Supported 00:09:31.532 Fixed Capacity Management: Not Supported 00:09:31.532 Variable Capacity Management: Not Supported 00:09:31.532 Delete Endurance Group: Not Supported 00:09:31.532 Delete NVM Set: Not Supported 00:09:31.532 Extended LBA Formats Supported: Supported 00:09:31.532 Flexible Data Placement Supported: Supported 00:09:31.532 00:09:31.532 Controller Memory Buffer Support 00:09:31.533 ================================ 00:09:31.533 Supported: No 00:09:31.533 00:09:31.533 Persistent Memory Region Support 00:09:31.533 ================================ 00:09:31.533 Supported: No 00:09:31.533 00:09:31.533 Admin Command Set Attributes 00:09:31.533 ============================ 00:09:31.533 Security Send/Receive: Not Supported 00:09:31.533 Format NVM: Supported 00:09:31.533 Firmware Activate/Download: Not Supported 00:09:31.533 Namespace Management: Supported 00:09:31.533 Device Self-Test: Not Supported 00:09:31.533 Directives: Supported 00:09:31.533 NVMe-MI: Not Supported 00:09:31.533 Virtualization Management: Not Supported 00:09:31.533 Doorbell Buffer Config: Supported 00:09:31.533 Get LBA Status Capability: Not Supported 00:09:31.533 Command & Feature Lockdown Capability: Not Supported 00:09:31.533 Abort Command Limit: 4 00:09:31.533 Async Event Request Limit: 4 00:09:31.533 Number of Firmware Slots: N/A 00:09:31.533 Firmware Slot 1 Read-Only: N/A 00:09:31.533 Firmware Activation Without Reset: N/A 00:09:31.533 Multiple Update Detection Support: N/A 00:09:31.533 Firmware Update 
Granularity: No Information Provided 00:09:31.533 Per-Namespace SMART Log: Yes 00:09:31.533 Asymmetric Namespace Access Log Page: Not Supported 00:09:31.533 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:31.533 Command Effects Log Page: Supported 00:09:31.533 Get Log Page Extended Data: Supported 00:09:31.533 Telemetry Log Pages: Not Supported 00:09:31.533 Persistent Event Log Pages: Not Supported 00:09:31.533 Supported Log Pages Log Page: May Support 00:09:31.533 Commands Supported & Effects Log Page: Not Supported 00:09:31.533 Feature Identifiers & Effects Log Page:May Support 00:09:31.533 NVMe-MI Commands & Effects Log Page: May Support 00:09:31.533 Data Area 4 for Telemetry Log: Not Supported 00:09:31.533 Error Log Page Entries Supported: 1 00:09:31.533 Keep Alive: Not Supported 00:09:31.533 00:09:31.533 NVM Command Set Attributes 00:09:31.533 ========================== 00:09:31.533 Submission Queue Entry Size 00:09:31.533 Max: 64 00:09:31.533 Min: 64 00:09:31.533 Completion Queue Entry Size 00:09:31.533 Max: 16 00:09:31.533 Min: 16 00:09:31.533 Number of Namespaces: 256 00:09:31.533 Compare Command: Supported 00:09:31.533 Write Uncorrectable Command: Not Supported 00:09:31.533 Dataset Management Command: Supported 00:09:31.533 Write Zeroes Command: Supported 00:09:31.533 Set Features Save Field: Supported 00:09:31.533 Reservations: Not Supported 00:09:31.533 Timestamp: Supported 00:09:31.533 Copy: Supported 00:09:31.533 Volatile Write Cache: Present 00:09:31.533 Atomic Write Unit (Normal): 1 00:09:31.533 Atomic Write Unit (PFail): 1 00:09:31.533 Atomic Compare & Write Unit: 1 00:09:31.533 Fused Compare & Write: Not Supported 00:09:31.533 Scatter-Gather List 00:09:31.533 SGL Command Set: Supported 00:09:31.533 SGL Keyed: Not Supported 00:09:31.533 SGL Bit Bucket Descriptor: Not Supported 00:09:31.533 SGL Metadata Pointer: Not Supported 00:09:31.533 Oversized SGL: Not Supported 00:09:31.533 SGL Metadata Address: Not Supported 00:09:31.533 SGL Offset: Not Supported 00:09:31.533 Transport SGL Data Block: Not Supported 00:09:31.533 Replay Protected Memory Block: Not Supported 00:09:31.533 00:09:31.533 Firmware Slot Information 00:09:31.533 ========================= 00:09:31.533 Active slot: 1 00:09:31.533 Slot 1 Firmware Revision: 1.0 00:09:31.533 00:09:31.533 00:09:31.533 Commands Supported and Effects 00:09:31.533 ============================== 00:09:31.533 Admin Commands 00:09:31.533 -------------- 00:09:31.533 Delete I/O Submission Queue (00h): Supported 00:09:31.533 Create I/O Submission Queue (01h): Supported 00:09:31.533 Get Log Page (02h): Supported 00:09:31.533 Delete I/O Completion Queue (04h): Supported 00:09:31.533 Create I/O Completion Queue (05h): Supported 00:09:31.533 Identify (06h): Supported 00:09:31.533 Abort (08h): Supported 00:09:31.533 Set Features (09h): Supported 00:09:31.533 Get Features (0Ah): Supported 00:09:31.533 Asynchronous Event Request (0Ch): Supported 00:09:31.533 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:31.533 Directive Send (19h): Supported 00:09:31.533 Directive Receive (1Ah): Supported 00:09:31.533 Virtualization Management (1Ch): Supported 00:09:31.533 Doorbell Buffer Config (7Ch): Supported 00:09:31.533 Format NVM (80h): Supported LBA-Change 00:09:31.533 I/O Commands 00:09:31.533 ------------ 00:09:31.533 Flush (00h): Supported LBA-Change 00:09:31.533 Write (01h): Supported LBA-Change 00:09:31.533 Read (02h): Supported 00:09:31.533 Compare (05h): Supported 00:09:31.533 Write Zeroes (08h): Supported LBA-Change 
00:09:31.533 Dataset Management (09h): Supported LBA-Change 00:09:31.533 Unknown (0Ch): Supported 00:09:31.533 Unknown (12h): Supported 00:09:31.533 Copy (19h): Supported LBA-Change 00:09:31.533 Unknown (1Dh): Supported LBA-Change 00:09:31.533 00:09:31.533 Error Log 00:09:31.533 ========= 00:09:31.533 00:09:31.533 Arbitration 00:09:31.533 =========== 00:09:31.533 Arbitration Burst: no limit 00:09:31.533 00:09:31.533 Power Management 00:09:31.533 ================ 00:09:31.533 Number of Power States: 1 00:09:31.533 Current Power State: Power State #0 00:09:31.533 Power State #0: 00:09:31.533 Max Power: 25.00 W 00:09:31.533 Non-Operational State: Operational 00:09:31.533 Entry Latency: 16 microseconds 00:09:31.533 Exit Latency: 4 microseconds 00:09:31.533 Relative Read Throughput: 0 00:09:31.533 Relative Read Latency: 0 00:09:31.533 Relative Write Throughput: 0 00:09:31.533 Relative Write Latency: 0 00:09:31.533 Idle Power: Not Reported 00:09:31.533 Active Power: Not Reported 00:09:31.533 Non-Operational Permissive Mode: Not Supported 00:09:31.533 00:09:31.533 Health Information 00:09:31.533 ================== 00:09:31.533 Critical Warnings: 00:09:31.533 Available Spare Space: OK 00:09:31.533 Temperature: OK 00:09:31.533 Device Reliability: OK 00:09:31.533 Read Only: No 00:09:31.533 Volatile Memory Backup: OK 00:09:31.533 Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.533 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:31.533 Available Spare: 0% 00:09:31.533 Available Spare Threshold: 0% 00:09:31.533 Life Percentage Used: 0% 00:09:31.533 Data Units Read: 1406 00:09:31.533 Data Units Written: 652 00:09:31.533 Host Read Commands: 58142 00:09:31.533 Host Write Commands: 28527 00:09:31.533 Controller Busy Time: 0 minutes 00:09:31.533 Power Cycles: 0 00:09:31.533 Power On Hours: 0 hours 00:09:31.533 Unsafe Shutdowns: 0 00:09:31.533 Unrecoverable Media Errors: 0 00:09:31.533 Lifetime Error Log Entries: 0 00:09:31.533 Warning Temperature Time: 0 minutes 00:09:31.533 Critical Temperature Time: 0 minutes 00:09:31.533 00:09:31.533 Number of Queues 00:09:31.533 ================ 00:09:31.533 Number of I/O Submission Queues: 64 00:09:31.533 Number of I/O Completion Queues: 64 00:09:31.533 00:09:31.533 ZNS Specific Controller Data 00:09:31.533 ============================ 00:09:31.533 Zone Append Size Limit: 0 00:09:31.533 00:09:31.533 00:09:31.533 Active Namespaces 00:09:31.533 ================= 00:09:31.533 Namespace ID:1 00:09:31.533 Error Recovery Timeout: Unlimited 00:09:31.533 Command Set Identifier: NVM (00h) 00:09:31.533 Deallocate: Supported 00:09:31.533 Deallocated/Unwritten Error: Supported 00:09:31.533 Deallocated Read Value: All 0x00 00:09:31.533 Deallocate in Write Zeroes: Not Supported 00:09:31.533 Deallocated Guard Field: 0xFFFF 00:09:31.533 Flush: Supported 00:09:31.533 Reservation: Not Supported 00:09:31.533 Namespace Sharing Capabilities: Multiple Controllers 00:09:31.533 Size (in LBAs): 262144 (1GiB) 00:09:31.533 Capacity (in LBAs): 262144 (1GiB) 00:09:31.533 Utilization (in LBAs): 262144 (1GiB) 00:09:31.533 Thin Provisioning: Not Supported 00:09:31.533 Per-NS Atomic Units: No 00:09:31.533 Maximum Single Source Range Length: 128 00:09:31.533 Maximum Copy Length: 128 00:09:31.533 Maximum Source Range Count: 128 00:09:31.533 NGUID/EUI64 Never Reused: No 00:09:31.533 Namespace Write Protected: No 00:09:31.533 Endurance group ID: 1 00:09:31.533 Number of LBA Formats: 8 00:09:31.533 Current LBA Format: LBA Format #04 00:09:31.533 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:09:31.533 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.533 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.533 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.533 00:09:31.533 Get Feature FDP: 00:09:31.533 ================ 00:09:31.533 Enabled: Yes 00:09:31.533 FDP configuration index: 0 00:09:31.533 00:09:31.533 FDP configurations log page 00:09:31.533 =========================== 00:09:31.533 Number of FDP configurations: 1 00:09:31.533 Version: 0 00:09:31.533 Size: 112 00:09:31.533 FDP Configuration Descriptor: 0 00:09:31.534 Descriptor Size: 96 00:09:31.534 Reclaim Group Identifier format: 2 00:09:31.534 FDP Volatile Write Cache: Not Present 00:09:31.534 FDP Configuration: Valid 00:09:31.534 Vendor Specific Size: 0 00:09:31.534 Number of Reclaim Groups: 2 00:09:31.534 Number of Recalim Unit Handles: 8 00:09:31.534 Max Placement Identifiers: 128 00:09:31.534 Number of Namespaces Suppprted: 256 00:09:31.534 Reclaim unit Nominal Size: 6000000 bytes 00:09:31.534 Estimated Reclaim Unit Time Limit: Not Reported 00:09:31.534 RUH Desc #000: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #001: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #002: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #003: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #004: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #005: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #006: RUH Type: Initially Isolated 00:09:31.534 RUH Desc #007: RUH Type: Initially Isolated 00:09:31.534 00:09:31.534 FDP reclaim unit handle usage log page 00:09:31.534 ====================================== 00:09:31.534 [2024-12-15 09:47:20.315830] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63527 terminated unexpected 00:09:31.534 [2024-12-15 09:47:20.317866] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63527 terminated unexpected 00:09:31.534 Number of Reclaim Unit Handles: 8 00:09:31.534 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:31.534 RUH Usage Desc #001: RUH Attributes: Unused 00:09:31.534 RUH Usage Desc #002: RUH Attributes: Unused 00:09:31.534 RUH Usage Desc #003: RUH Attributes: Unused 00:09:31.534 RUH Usage Desc #004: RUH Attributes: Unused 00:09:31.534 RUH Usage Desc #005: RUH Attributes: Unused 00:09:31.534 RUH Usage Desc #006: RUH Attributes: Unused 00:09:31.534 RUH Usage Desc #007: RUH Attributes: Unused 00:09:31.534 00:09:31.534 FDP statistics log page 00:09:31.534 ======================= 00:09:31.534 Host bytes with metadata written: 421486592 00:09:31.534 Media bytes with metadata written: 421564416 00:09:31.534 Media bytes erased: 0 00:09:31.534 00:09:31.534 FDP events log page 00:09:31.534 =================== 00:09:31.534 Number of FDP events: 0 00:09:31.534 00:09:31.534 ===================================================== 00:09:31.534 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:31.534 ===================================================== 00:09:31.534 Controller Capabilities/Features 00:09:31.534 ================================ 00:09:31.534 Vendor ID: 1b36 00:09:31.534 Subsystem Vendor ID: 1af4 00:09:31.534 Serial Number: 12340 00:09:31.534 Model Number: QEMU NVMe Ctrl 00:09:31.534 Firmware Version: 8.0.0 00:09:31.534
Recommended Arb Burst: 6 00:09:31.534 IEEE OUI Identifier: 00 54 52 00:09:31.534 Multi-path I/O 00:09:31.534 May have multiple subsystem ports: No 00:09:31.534 May have multiple controllers: No 00:09:31.534 Associated with SR-IOV VF: No 00:09:31.534 Max Data Transfer Size: 524288 00:09:31.534 Max Number of Namespaces: 256 00:09:31.534 Max Number of I/O Queues: 64 00:09:31.534 NVMe Specification Version (VS): 1.4 00:09:31.534 NVMe Specification Version (Identify): 1.4 00:09:31.534 Maximum Queue Entries: 2048 00:09:31.534 Contiguous Queues Required: Yes 00:09:31.534 Arbitration Mechanisms Supported 00:09:31.534 Weighted Round Robin: Not Supported 00:09:31.534 Vendor Specific: Not Supported 00:09:31.534 Reset Timeout: 7500 ms 00:09:31.534 Doorbell Stride: 4 bytes 00:09:31.534 NVM Subsystem Reset: Not Supported 00:09:31.534 Command Sets Supported 00:09:31.534 NVM Command Set: Supported 00:09:31.534 Boot Partition: Not Supported 00:09:31.534 Memory Page Size Minimum: 4096 bytes 00:09:31.534 Memory Page Size Maximum: 65536 bytes 00:09:31.534 Persistent Memory Region: Not Supported 00:09:31.534 Optional Asynchronous Events Supported 00:09:31.534 Namespace Attribute Notices: Supported 00:09:31.534 Firmware Activation Notices: Not Supported 00:09:31.534 ANA Change Notices: Not Supported 00:09:31.534 PLE Aggregate Log Change Notices: Not Supported 00:09:31.534 LBA Status Info Alert Notices: Not Supported 00:09:31.534 EGE Aggregate Log Change Notices: Not Supported 00:09:31.534 Normal NVM Subsystem Shutdown event: Not Supported 00:09:31.534 Zone Descriptor Change Notices: Not Supported 00:09:31.534 Discovery Log Change Notices: Not Supported 00:09:31.534 Controller Attributes 00:09:31.534 128-bit Host Identifier: Not Supported 00:09:31.534 Non-Operational Permissive Mode: Not Supported 00:09:31.534 NVM Sets: Not Supported 00:09:31.534 Read Recovery Levels: Not Supported 00:09:31.534 Endurance Groups: Not Supported 00:09:31.534 Predictable Latency Mode: Not Supported 00:09:31.534 Traffic Based Keep ALive: Not Supported 00:09:31.534 Namespace Granularity: Not Supported 00:09:31.534 SQ Associations: Not Supported 00:09:31.534 UUID List: Not Supported 00:09:31.534 Multi-Domain Subsystem: Not Supported 00:09:31.534 Fixed Capacity Management: Not Supported 00:09:31.534 Variable Capacity Management: Not Supported 00:09:31.534 Delete Endurance Group: Not Supported 00:09:31.534 Delete NVM Set: Not Supported 00:09:31.534 Extended LBA Formats Supported: Supported 00:09:31.534 Flexible Data Placement Supported: Not Supported 00:09:31.534 00:09:31.534 Controller Memory Buffer Support 00:09:31.534 ================================ 00:09:31.534 Supported: No 00:09:31.534 00:09:31.534 Persistent Memory Region Support 00:09:31.534 ================================ 00:09:31.534 Supported: No 00:09:31.534 00:09:31.534 Admin Command Set Attributes 00:09:31.534 ============================ 00:09:31.534 Security Send/Receive: Not Supported 00:09:31.534 Format NVM: Supported 00:09:31.534 Firmware Activate/Download: Not Supported 00:09:31.534 Namespace Management: Supported 00:09:31.534 Device Self-Test: Not Supported 00:09:31.534 Directives: Supported 00:09:31.534 NVMe-MI: Not Supported 00:09:31.534 Virtualization Management: Not Supported 00:09:31.534 Doorbell Buffer Config: Supported 00:09:31.534 Get LBA Status Capability: Not Supported 00:09:31.534 Command & Feature Lockdown Capability: Not Supported 00:09:31.534 Abort Command Limit: 4 00:09:31.534 Async Event Request Limit: 4 00:09:31.534 Number of Firmware Slots: N/A 
00:09:31.534 Firmware Slot 1 Read-Only: N/A 00:09:31.534 Firmware Activation Without Reset: N/A 00:09:31.534 Multiple Update Detection Support: N/A 00:09:31.534 Firmware Update Granularity: No Information Provided 00:09:31.534 Per-Namespace SMART Log: Yes 00:09:31.534 Asymmetric Namespace Access Log Page: Not Supported 00:09:31.534 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:31.534 Command Effects Log Page: Supported 00:09:31.534 Get Log Page Extended Data: Supported 00:09:31.534 Telemetry Log Pages: Not Supported 00:09:31.534 Persistent Event Log Pages: Not Supported 00:09:31.534 Supported Log Pages Log Page: May Support 00:09:31.534 Commands Supported & Effects Log Page: Not Supported 00:09:31.534 Feature Identifiers & Effects Log Page:May Support 00:09:31.534 NVMe-MI Commands & Effects Log Page: May Support 00:09:31.534 Data Area 4 for Telemetry Log: Not Supported 00:09:31.534 Error Log Page Entries Supported: 1 00:09:31.534 Keep Alive: Not Supported 00:09:31.534 00:09:31.534 NVM Command Set Attributes 00:09:31.534 ========================== 00:09:31.534 Submission Queue Entry Size 00:09:31.534 Max: 64 00:09:31.534 Min: 64 00:09:31.534 Completion Queue Entry Size 00:09:31.534 Max: 16 00:09:31.534 Min: 16 00:09:31.534 Number of Namespaces: 256 00:09:31.534 Compare Command: Supported 00:09:31.534 Write Uncorrectable Command: Not Supported 00:09:31.534 Dataset Management Command: Supported 00:09:31.534 Write Zeroes Command: Supported 00:09:31.534 Set Features Save Field: Supported 00:09:31.534 Reservations: Not Supported 00:09:31.534 Timestamp: Supported 00:09:31.534 Copy: Supported 00:09:31.534 Volatile Write Cache: Present 00:09:31.534 Atomic Write Unit (Normal): 1 00:09:31.534 Atomic Write Unit (PFail): 1 00:09:31.534 Atomic Compare & Write Unit: 1 00:09:31.534 Fused Compare & Write: Not Supported 00:09:31.534 Scatter-Gather List 00:09:31.534 SGL Command Set: Supported 00:09:31.534 SGL Keyed: Not Supported 00:09:31.534 SGL Bit Bucket Descriptor: Not Supported 00:09:31.534 SGL Metadata Pointer: Not Supported 00:09:31.534 Oversized SGL: Not Supported 00:09:31.534 SGL Metadata Address: Not Supported 00:09:31.534 SGL Offset: Not Supported 00:09:31.534 Transport SGL Data Block: Not Supported 00:09:31.534 Replay Protected Memory Block: Not Supported 00:09:31.534 00:09:31.534 Firmware Slot Information 00:09:31.534 ========================= 00:09:31.534 Active slot: 1 00:09:31.534 Slot 1 Firmware Revision: 1.0 00:09:31.534 00:09:31.534 00:09:31.534 Commands Supported and Effects 00:09:31.534 ============================== 00:09:31.534 Admin Commands 00:09:31.534 -------------- 00:09:31.534 Delete I/O Submission Queue (00h): Supported 00:09:31.534 Create I/O Submission Queue (01h): Supported 00:09:31.534 Get Log Page (02h): Supported 00:09:31.534 Delete I/O Completion Queue (04h): Supported 00:09:31.534 Create I/O Completion Queue (05h): Supported 00:09:31.534 Identify (06h): Supported 00:09:31.534 Abort (08h): Supported 00:09:31.534 Set Features (09h): Supported 00:09:31.534 Get Features (0Ah): Supported 00:09:31.534 Asynchronous Event Request (0Ch): Supported 00:09:31.534 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:31.535 Directive Send (19h): Supported 00:09:31.535 Directive Receive (1Ah): Supported 00:09:31.535 Virtualization Management (1Ch): Supported 00:09:31.535 Doorbell Buffer Config (7Ch): Supported 00:09:31.535 Format NVM (80h): Supported LBA-Change 00:09:31.535 I/O Commands 00:09:31.535 ------------ 00:09:31.535 Flush (00h): Supported LBA-Change 00:09:31.535 
Write (01h): Supported LBA-Change 00:09:31.535 Read (02h): Supported 00:09:31.535 Compare (05h): Supported 00:09:31.535 Write Zeroes (08h): Supported LBA-Change 00:09:31.535 Dataset Management (09h): Supported LBA-Change 00:09:31.535 Unknown (0Ch): Supported 00:09:31.535 [2024-12-15 09:47:20.319819] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63527 terminated unexpected 00:09:31.535 Unknown (12h): Supported 00:09:31.535 Copy (19h): Supported LBA-Change 00:09:31.535 Unknown (1Dh): Supported LBA-Change 00:09:31.535 00:09:31.535 Error Log 00:09:31.535 ========= 00:09:31.535 00:09:31.535 Arbitration 00:09:31.535 =========== 00:09:31.535 Arbitration Burst: no limit 00:09:31.535 00:09:31.535 Power Management 00:09:31.535 ================ 00:09:31.535 Number of Power States: 1 00:09:31.535 Current Power State: Power State #0 00:09:31.535 Power State #0: 00:09:31.535 Max Power: 25.00 W 00:09:31.535 Non-Operational State: Operational 00:09:31.535 Entry Latency: 16 microseconds 00:09:31.535 Exit Latency: 4 microseconds 00:09:31.535 Relative Read Throughput: 0 00:09:31.535 Relative Read Latency: 0 00:09:31.535 Relative Write Throughput: 0 00:09:31.535 Relative Write Latency: 0 00:09:31.535 Idle Power: Not Reported 00:09:31.535 Active Power: Not Reported 00:09:31.535 Non-Operational Permissive Mode: Not Supported 00:09:31.535 00:09:31.535 Health Information 00:09:31.535 ================== 00:09:31.535 Critical Warnings: 00:09:31.535 Available Spare Space: OK 00:09:31.535 Temperature: OK 00:09:31.535 Device Reliability: OK 00:09:31.535 Read Only: No 00:09:31.535 Volatile Memory Backup: OK 00:09:31.535 Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.535 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:31.535 Available Spare: 0% 00:09:31.535 Available Spare Threshold: 0% 00:09:31.535 Life Percentage Used: 0% 00:09:31.535 Data Units Read: 1693 00:09:31.535 Data Units Written: 773 00:09:31.535 Host Read Commands: 79602 00:09:31.535 Host Write Commands: 39476 00:09:31.535 Controller Busy Time: 0 minutes 00:09:31.535 Power Cycles: 0 00:09:31.535 Power On Hours: 0 hours 00:09:31.535 Unsafe Shutdowns: 0 00:09:31.535 Unrecoverable Media Errors: 0 00:09:31.535 Lifetime Error Log Entries: 0 00:09:31.535 Warning Temperature Time: 0 minutes 00:09:31.535 Critical Temperature Time: 0 minutes 00:09:31.535 00:09:31.535 Number of Queues 00:09:31.535 ================ 00:09:31.535 Number of I/O Submission Queues: 64 00:09:31.535 Number of I/O Completion Queues: 64 00:09:31.535 00:09:31.535 ZNS Specific Controller Data 00:09:31.535 ============================ 00:09:31.535 Zone Append Size Limit: 0 00:09:31.535 00:09:31.535 00:09:31.535 Active Namespaces 00:09:31.535 ================= 00:09:31.535 Namespace ID:1 00:09:31.535 Error Recovery Timeout: Unlimited 00:09:31.535 Command Set Identifier: NVM (00h) 00:09:31.535 Deallocate: Supported 00:09:31.535 Deallocated/Unwritten Error: Supported 00:09:31.535 Deallocated Read Value: All 0x00 00:09:31.535 Deallocate in Write Zeroes: Not Supported 00:09:31.535 Deallocated Guard Field: 0xFFFF 00:09:31.535 Flush: Supported 00:09:31.535 Reservation: Not Supported 00:09:31.535 Metadata Transferred as: Separate Metadata Buffer 00:09:31.535 Namespace Sharing Capabilities: Private 00:09:31.535 Size (in LBAs): 1548666 (5GiB) 00:09:31.535 Capacity (in LBAs): 1548666 (5GiB) 00:09:31.535 Utilization (in LBAs): 1548666 (5GiB) 00:09:31.535 Thin Provisioning: Not Supported 00:09:31.535 Per-NS Atomic Units: No 00:09:31.535 Maximum Single Source 
Range Length: 128 00:09:31.535 Maximum Copy Length: 128 00:09:31.535 Maximum Source Range Count: 128 00:09:31.535 NGUID/EUI64 Never Reused: No 00:09:31.535 Namespace Write Protected: No 00:09:31.535 Number of LBA Formats: 8 00:09:31.535 Current LBA Format: LBA Format #07 00:09:31.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.535 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.535 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.535 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.535 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.535 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.535 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.535 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.535 00:09:31.535 ===================================================== 00:09:31.535 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:31.535 ===================================================== 00:09:31.535 Controller Capabilities/Features 00:09:31.535 ================================ 00:09:31.535 Vendor ID: 1b36 00:09:31.535 Subsystem Vendor ID: 1af4 00:09:31.535 Serial Number: 12341 00:09:31.535 Model Number: QEMU NVMe Ctrl 00:09:31.535 Firmware Version: 8.0.0 00:09:31.535 Recommended Arb Burst: 6 00:09:31.535 IEEE OUI Identifier: 00 54 52 00:09:31.535 Multi-path I/O 00:09:31.535 May have multiple subsystem ports: No 00:09:31.535 May have multiple controllers: No 00:09:31.535 Associated with SR-IOV VF: No 00:09:31.535 Max Data Transfer Size: 524288 00:09:31.535 Max Number of Namespaces: 256 00:09:31.535 Max Number of I/O Queues: 64 00:09:31.535 NVMe Specification Version (VS): 1.4 00:09:31.535 NVMe Specification Version (Identify): 1.4 00:09:31.535 Maximum Queue Entries: 2048 00:09:31.535 Contiguous Queues Required: Yes 00:09:31.535 Arbitration Mechanisms Supported 00:09:31.535 Weighted Round Robin: Not Supported 00:09:31.535 Vendor Specific: Not Supported 00:09:31.535 Reset Timeout: 7500 ms 00:09:31.535 Doorbell Stride: 4 bytes 00:09:31.535 NVM Subsystem Reset: Not Supported 00:09:31.535 Command Sets Supported 00:09:31.535 NVM Command Set: Supported 00:09:31.535 Boot Partition: Not Supported 00:09:31.535 Memory Page Size Minimum: 4096 bytes 00:09:31.535 Memory Page Size Maximum: 65536 bytes 00:09:31.535 Persistent Memory Region: Not Supported 00:09:31.535 Optional Asynchronous Events Supported 00:09:31.535 Namespace Attribute Notices: Supported 00:09:31.535 Firmware Activation Notices: Not Supported 00:09:31.535 ANA Change Notices: Not Supported 00:09:31.535 PLE Aggregate Log Change Notices: Not Supported 00:09:31.535 LBA Status Info Alert Notices: Not Supported 00:09:31.535 EGE Aggregate Log Change Notices: Not Supported 00:09:31.535 Normal NVM Subsystem Shutdown event: Not Supported 00:09:31.535 Zone Descriptor Change Notices: Not Supported 00:09:31.535 Discovery Log Change Notices: Not Supported 00:09:31.535 Controller Attributes 00:09:31.535 128-bit Host Identifier: Not Supported 00:09:31.535 Non-Operational Permissive Mode: Not Supported 00:09:31.535 NVM Sets: Not Supported 00:09:31.535 Read Recovery Levels: Not Supported 00:09:31.535 Endurance Groups: Not Supported 00:09:31.535 Predictable Latency Mode: Not Supported 00:09:31.535 Traffic Based Keep ALive: Not Supported 00:09:31.535 Namespace Granularity: Not Supported 00:09:31.535 SQ Associations: Not Supported 00:09:31.535 UUID List: Not Supported 00:09:31.535 Multi-Domain Subsystem: Not Supported 00:09:31.535 Fixed Capacity Management: Not Supported 00:09:31.535 
Variable Capacity Management: Not Supported 00:09:31.535 Delete Endurance Group: Not Supported 00:09:31.535 Delete NVM Set: Not Supported 00:09:31.535 Extended LBA Formats Supported: Supported 00:09:31.535 Flexible Data Placement Supported: Not Supported 00:09:31.535 00:09:31.535 Controller Memory Buffer Support 00:09:31.535 ================================ 00:09:31.535 Supported: No 00:09:31.535 00:09:31.535 Persistent Memory Region Support 00:09:31.535 ================================ 00:09:31.535 Supported: No 00:09:31.535 00:09:31.535 Admin Command Set Attributes 00:09:31.535 ============================ 00:09:31.535 Security Send/Receive: Not Supported 00:09:31.535 Format NVM: Supported 00:09:31.535 Firmware Activate/Download: Not Supported 00:09:31.535 Namespace Management: Supported 00:09:31.535 Device Self-Test: Not Supported 00:09:31.535 Directives: Supported 00:09:31.535 NVMe-MI: Not Supported 00:09:31.535 Virtualization Management: Not Supported 00:09:31.535 Doorbell Buffer Config: Supported 00:09:31.535 Get LBA Status Capability: Not Supported 00:09:31.535 Command & Feature Lockdown Capability: Not Supported 00:09:31.535 Abort Command Limit: 4 00:09:31.535 Async Event Request Limit: 4 00:09:31.535 Number of Firmware Slots: N/A 00:09:31.535 Firmware Slot 1 Read-Only: N/A 00:09:31.535 Firmware Activation Without Reset: N/A 00:09:31.535 Multiple Update Detection Support: N/A 00:09:31.535 Firmware Update Granularity: No Information Provided 00:09:31.535 Per-Namespace SMART Log: Yes 00:09:31.535 Asymmetric Namespace Access Log Page: Not Supported 00:09:31.535 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:31.535 Command Effects Log Page: Supported 00:09:31.536 Get Log Page Extended Data: Supported 00:09:31.536 Telemetry Log Pages: Not Supported 00:09:31.536 Persistent Event Log Pages: Not Supported 00:09:31.536 Supported Log Pages Log Page: May Support 00:09:31.536 Commands Supported & Effects Log Page: Not Supported 00:09:31.536 Feature Identifiers & Effects Log Page:May Support 00:09:31.536 NVMe-MI Commands & Effects Log Page: May Support 00:09:31.536 Data Area 4 for Telemetry Log: Not Supported 00:09:31.536 Error Log Page Entries Supported: 1 00:09:31.536 Keep Alive: Not Supported 00:09:31.536 00:09:31.536 NVM Command Set Attributes 00:09:31.536 ========================== 00:09:31.536 Submission Queue Entry Size 00:09:31.536 Max: 64 00:09:31.536 Min: 64 00:09:31.536 Completion Queue Entry Size 00:09:31.536 Max: 16 00:09:31.536 Min: 16 00:09:31.536 Number of Namespaces: 256 00:09:31.536 Compare Command: Supported 00:09:31.536 Write Uncorrectable Command: Not Supported 00:09:31.536 Dataset Management Command: Supported 00:09:31.536 Write Zeroes Command: Supported 00:09:31.536 Set Features Save Field: Supported 00:09:31.536 Reservations: Not Supported 00:09:31.536 Timestamp: Supported 00:09:31.536 Copy: Supported 00:09:31.536 Volatile Write Cache: Present 00:09:31.536 Atomic Write Unit (Normal): 1 00:09:31.536 Atomic Write Unit (PFail): 1 00:09:31.536 Atomic Compare & Write Unit: 1 00:09:31.536 Fused Compare & Write: Not Supported 00:09:31.536 Scatter-Gather List 00:09:31.536 SGL Command Set: Supported 00:09:31.536 SGL Keyed: Not Supported 00:09:31.536 SGL Bit Bucket Descriptor: Not Supported 00:09:31.536 SGL Metadata Pointer: Not Supported 00:09:31.536 Oversized SGL: Not Supported 00:09:31.536 SGL Metadata Address: Not Supported 00:09:31.536 SGL Offset: Not Supported 00:09:31.536 Transport SGL Data Block: Not Supported 00:09:31.536 Replay Protected Memory Block: Not Supported 
00:09:31.536 00:09:31.536 Firmware Slot Information 00:09:31.536 ========================= 00:09:31.536 Active slot: 1 00:09:31.536 Slot 1 Firmware Revision: 1.0 00:09:31.536 00:09:31.536 00:09:31.536 Commands Supported and Effects 00:09:31.536 ============================== 00:09:31.536 Admin Commands 00:09:31.536 -------------- 00:09:31.536 Delete I/O Submission Queue (00h): Supported 00:09:31.536 Create I/O Submission Queue (01h): Supported 00:09:31.536 Get Log Page (02h): Supported 00:09:31.536 Delete I/O Completion Queue (04h): Supported 00:09:31.536 Create I/O Completion Queue (05h): Supported 00:09:31.536 Identify (06h): Supported 00:09:31.536 Abort (08h): Supported 00:09:31.536 Set Features (09h): Supported 00:09:31.536 Get Features (0Ah): Supported 00:09:31.536 Asynchronous Event Request (0Ch): Supported 00:09:31.536 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:31.536 Directive Send (19h): Supported 00:09:31.536 Directive Receive (1Ah): Supported 00:09:31.536 Virtualization Management (1Ch): Supported 00:09:31.536 Doorbell Buffer Config (7Ch): Supported 00:09:31.536 Format NVM (80h): Supported LBA-Change 00:09:31.536 I/O Commands 00:09:31.536 ------------ 00:09:31.536 Flush (00h): Supported LBA-Change 00:09:31.536 Write (01h): Supported LBA-Change 00:09:31.536 Read (02h): Supported 00:09:31.536 Compare (05h): Supported 00:09:31.536 Write Zeroes (08h): Supported LBA-Change 00:09:31.536 Dataset Management (09h): Supported LBA-Change 00:09:31.536 Unknown (0Ch): Supported 00:09:31.536 Unknown (12h): Supported 00:09:31.536 Copy (19h): Supported LBA-Change 00:09:31.536 Unknown (1Dh): Supported LBA-Change 00:09:31.536 00:09:31.536 Error Log 00:09:31.536 ========= 00:09:31.536 00:09:31.536 Arbitration 00:09:31.536 =========== 00:09:31.536 Arbitration Burst: no limit 00:09:31.536 00:09:31.536 Power Management 00:09:31.536 ================ 00:09:31.536 Number of Power States: 1 00:09:31.536 Current Power State: Power State #0 00:09:31.536 Power State #0: 00:09:31.536 Max Power: 25.00 W 00:09:31.536 Non-Operational State: Operational 00:09:31.536 Entry Latency: 16 microseconds 00:09:31.536 Exit Latency: 4 microseconds 00:09:31.536 Relative Read Throughput: 0 00:09:31.536 Relative Read Latency: 0 00:09:31.536 Relative Write Throughput: 0 00:09:31.536 Relative Write Latency: 0 00:09:31.536 Idle Power: Not Reported 00:09:31.536 Active Power: Not Reported 00:09:31.536 Non-Operational Permissive Mode: Not Supported 00:09:31.536 00:09:31.536 Health Information 00:09:31.536 ================== 00:09:31.536 Critical Warnings: 00:09:31.536 Available Spare Space: OK 00:09:31.536 Temperature: OK 00:09:31.536 Device Reliability: OK 00:09:31.536 Read Only: No 00:09:31.536 Volatile Memory Backup: OK 00:09:31.536 Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.536 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:31.536 Available Spare: 0% 00:09:31.536 Available Spare Threshold: 0% 00:09:31.536 Life Percentage Used: 0% 00:09:31.536 Data Units Read: 1197 00:09:31.536 [2024-12-15 09:47:20.322073] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63527 terminated unexpected 00:09:31.536 Data Units Written: 550 00:09:31.536 Host Read Commands: 56187 00:09:31.536 Host Write Commands: 27581 00:09:31.536 Controller Busy Time: 0 minutes 00:09:31.536 Power Cycles: 0 00:09:31.536 Power On Hours: 0 hours 00:09:31.536 Unsafe Shutdowns: 0 00:09:31.536 Unrecoverable Media Errors: 0 00:09:31.536 Lifetime Error Log Entries: 0 00:09:31.536 Warning Temperature
Time: 0 minutes 00:09:31.536 Critical Temperature Time: 0 minutes 00:09:31.536 00:09:31.536 Number of Queues 00:09:31.536 ================ 00:09:31.536 Number of I/O Submission Queues: 64 00:09:31.536 Number of I/O Completion Queues: 64 00:09:31.536 00:09:31.536 ZNS Specific Controller Data 00:09:31.536 ============================ 00:09:31.536 Zone Append Size Limit: 0 00:09:31.536 00:09:31.536 00:09:31.536 Active Namespaces 00:09:31.536 ================= 00:09:31.536 Namespace ID:1 00:09:31.536 Error Recovery Timeout: Unlimited 00:09:31.536 Command Set Identifier: NVM (00h) 00:09:31.536 Deallocate: Supported 00:09:31.536 Deallocated/Unwritten Error: Supported 00:09:31.536 Deallocated Read Value: All 0x00 00:09:31.536 Deallocate in Write Zeroes: Not Supported 00:09:31.536 Deallocated Guard Field: 0xFFFF 00:09:31.536 Flush: Supported 00:09:31.536 Reservation: Not Supported 00:09:31.536 Namespace Sharing Capabilities: Private 00:09:31.536 Size (in LBAs): 1310720 (5GiB) 00:09:31.536 Capacity (in LBAs): 1310720 (5GiB) 00:09:31.536 Utilization (in LBAs): 1310720 (5GiB) 00:09:31.536 Thin Provisioning: Not Supported 00:09:31.536 Per-NS Atomic Units: No 00:09:31.536 Maximum Single Source Range Length: 128 00:09:31.536 Maximum Copy Length: 128 00:09:31.536 Maximum Source Range Count: 128 00:09:31.536 NGUID/EUI64 Never Reused: No 00:09:31.536 Namespace Write Protected: No 00:09:31.536 Number of LBA Formats: 8 00:09:31.536 Current LBA Format: LBA Format #04 00:09:31.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.536 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.536 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.536 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.536 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.536 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.536 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.536 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.536 00:09:31.536 ===================================================== 00:09:31.536 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:31.536 ===================================================== 00:09:31.536 Controller Capabilities/Features 00:09:31.536 ================================ 00:09:31.536 Vendor ID: 1b36 00:09:31.536 Subsystem Vendor ID: 1af4 00:09:31.536 Serial Number: 12342 00:09:31.536 Model Number: QEMU NVMe Ctrl 00:09:31.536 Firmware Version: 8.0.0 00:09:31.536 Recommended Arb Burst: 6 00:09:31.536 IEEE OUI Identifier: 00 54 52 00:09:31.536 Multi-path I/O 00:09:31.536 May have multiple subsystem ports: No 00:09:31.536 May have multiple controllers: No 00:09:31.536 Associated with SR-IOV VF: No 00:09:31.536 Max Data Transfer Size: 524288 00:09:31.536 Max Number of Namespaces: 256 00:09:31.536 Max Number of I/O Queues: 64 00:09:31.536 NVMe Specification Version (VS): 1.4 00:09:31.536 NVMe Specification Version (Identify): 1.4 00:09:31.536 Maximum Queue Entries: 2048 00:09:31.536 Contiguous Queues Required: Yes 00:09:31.537 Arbitration Mechanisms Supported 00:09:31.537 Weighted Round Robin: Not Supported 00:09:31.537 Vendor Specific: Not Supported 00:09:31.537 Reset Timeout: 7500 ms 00:09:31.537 Doorbell Stride: 4 bytes 00:09:31.537 NVM Subsystem Reset: Not Supported 00:09:31.537 Command Sets Supported 00:09:31.537 NVM Command Set: Supported 00:09:31.537 Boot Partition: Not Supported 00:09:31.537 Memory Page Size Minimum: 4096 bytes 00:09:31.537 Memory Page Size Maximum: 65536 bytes 00:09:31.537 Persistent Memory Region: Not Supported 
00:09:31.537 Optional Asynchronous Events Supported 00:09:31.537 Namespace Attribute Notices: Supported 00:09:31.537 Firmware Activation Notices: Not Supported 00:09:31.537 ANA Change Notices: Not Supported 00:09:31.537 PLE Aggregate Log Change Notices: Not Supported 00:09:31.537 LBA Status Info Alert Notices: Not Supported 00:09:31.537 EGE Aggregate Log Change Notices: Not Supported 00:09:31.537 Normal NVM Subsystem Shutdown event: Not Supported 00:09:31.537 Zone Descriptor Change Notices: Not Supported 00:09:31.537 Discovery Log Change Notices: Not Supported 00:09:31.537 Controller Attributes 00:09:31.537 128-bit Host Identifier: Not Supported 00:09:31.537 Non-Operational Permissive Mode: Not Supported 00:09:31.537 NVM Sets: Not Supported 00:09:31.537 Read Recovery Levels: Not Supported 00:09:31.537 Endurance Groups: Not Supported 00:09:31.537 Predictable Latency Mode: Not Supported 00:09:31.537 Traffic Based Keep ALive: Not Supported 00:09:31.537 Namespace Granularity: Not Supported 00:09:31.537 SQ Associations: Not Supported 00:09:31.537 UUID List: Not Supported 00:09:31.537 Multi-Domain Subsystem: Not Supported 00:09:31.537 Fixed Capacity Management: Not Supported 00:09:31.537 Variable Capacity Management: Not Supported 00:09:31.537 Delete Endurance Group: Not Supported 00:09:31.537 Delete NVM Set: Not Supported 00:09:31.537 Extended LBA Formats Supported: Supported 00:09:31.537 Flexible Data Placement Supported: Not Supported 00:09:31.537 00:09:31.537 Controller Memory Buffer Support 00:09:31.537 ================================ 00:09:31.537 Supported: No 00:09:31.537 00:09:31.537 Persistent Memory Region Support 00:09:31.537 ================================ 00:09:31.537 Supported: No 00:09:31.537 00:09:31.537 Admin Command Set Attributes 00:09:31.537 ============================ 00:09:31.537 Security Send/Receive: Not Supported 00:09:31.537 Format NVM: Supported 00:09:31.537 Firmware Activate/Download: Not Supported 00:09:31.537 Namespace Management: Supported 00:09:31.537 Device Self-Test: Not Supported 00:09:31.537 Directives: Supported 00:09:31.537 NVMe-MI: Not Supported 00:09:31.537 Virtualization Management: Not Supported 00:09:31.537 Doorbell Buffer Config: Supported 00:09:31.537 Get LBA Status Capability: Not Supported 00:09:31.537 Command & Feature Lockdown Capability: Not Supported 00:09:31.537 Abort Command Limit: 4 00:09:31.537 Async Event Request Limit: 4 00:09:31.537 Number of Firmware Slots: N/A 00:09:31.537 Firmware Slot 1 Read-Only: N/A 00:09:31.537 Firmware Activation Without Reset: N/A 00:09:31.537 Multiple Update Detection Support: N/A 00:09:31.537 Firmware Update Granularity: No Information Provided 00:09:31.537 Per-Namespace SMART Log: Yes 00:09:31.537 Asymmetric Namespace Access Log Page: Not Supported 00:09:31.537 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:31.537 Command Effects Log Page: Supported 00:09:31.537 Get Log Page Extended Data: Supported 00:09:31.537 Telemetry Log Pages: Not Supported 00:09:31.537 Persistent Event Log Pages: Not Supported 00:09:31.537 Supported Log Pages Log Page: May Support 00:09:31.537 Commands Supported & Effects Log Page: Not Supported 00:09:31.537 Feature Identifiers & Effects Log Page:May Support 00:09:31.537 NVMe-MI Commands & Effects Log Page: May Support 00:09:31.537 Data Area 4 for Telemetry Log: Not Supported 00:09:31.537 Error Log Page Entries Supported: 1 00:09:31.537 Keep Alive: Not Supported 00:09:31.537 00:09:31.537 NVM Command Set Attributes 00:09:31.537 ========================== 00:09:31.537 Submission Queue 
Entry Size 00:09:31.537 Max: 64 00:09:31.537 Min: 64 00:09:31.537 Completion Queue Entry Size 00:09:31.537 Max: 16 00:09:31.537 Min: 16 00:09:31.537 Number of Namespaces: 256 00:09:31.537 Compare Command: Supported 00:09:31.537 Write Uncorrectable Command: Not Supported 00:09:31.537 Dataset Management Command: Supported 00:09:31.537 Write Zeroes Command: Supported 00:09:31.537 Set Features Save Field: Supported 00:09:31.537 Reservations: Not Supported 00:09:31.537 Timestamp: Supported 00:09:31.537 Copy: Supported 00:09:31.537 Volatile Write Cache: Present 00:09:31.537 Atomic Write Unit (Normal): 1 00:09:31.537 Atomic Write Unit (PFail): 1 00:09:31.537 Atomic Compare & Write Unit: 1 00:09:31.537 Fused Compare & Write: Not Supported 00:09:31.537 Scatter-Gather List 00:09:31.537 SGL Command Set: Supported 00:09:31.537 SGL Keyed: Not Supported 00:09:31.537 SGL Bit Bucket Descriptor: Not Supported 00:09:31.537 SGL Metadata Pointer: Not Supported 00:09:31.537 Oversized SGL: Not Supported 00:09:31.537 SGL Metadata Address: Not Supported 00:09:31.537 SGL Offset: Not Supported 00:09:31.537 Transport SGL Data Block: Not Supported 00:09:31.537 Replay Protected Memory Block: Not Supported 00:09:31.537 00:09:31.537 Firmware Slot Information 00:09:31.537 ========================= 00:09:31.537 Active slot: 1 00:09:31.537 Slot 1 Firmware Revision: 1.0 00:09:31.537 00:09:31.537 00:09:31.537 Commands Supported and Effects 00:09:31.537 ============================== 00:09:31.537 Admin Commands 00:09:31.537 -------------- 00:09:31.537 Delete I/O Submission Queue (00h): Supported 00:09:31.537 Create I/O Submission Queue (01h): Supported 00:09:31.537 Get Log Page (02h): Supported 00:09:31.537 Delete I/O Completion Queue (04h): Supported 00:09:31.537 Create I/O Completion Queue (05h): Supported 00:09:31.537 Identify (06h): Supported 00:09:31.537 Abort (08h): Supported 00:09:31.537 Set Features (09h): Supported 00:09:31.537 Get Features (0Ah): Supported 00:09:31.537 Asynchronous Event Request (0Ch): Supported 00:09:31.537 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:31.537 Directive Send (19h): Supported 00:09:31.537 Directive Receive (1Ah): Supported 00:09:31.537 Virtualization Management (1Ch): Supported 00:09:31.537 Doorbell Buffer Config (7Ch): Supported 00:09:31.537 Format NVM (80h): Supported LBA-Change 00:09:31.537 I/O Commands 00:09:31.537 ------------ 00:09:31.537 Flush (00h): Supported LBA-Change 00:09:31.537 Write (01h): Supported LBA-Change 00:09:31.537 Read (02h): Supported 00:09:31.537 Compare (05h): Supported 00:09:31.537 Write Zeroes (08h): Supported LBA-Change 00:09:31.537 Dataset Management (09h): Supported LBA-Change 00:09:31.537 Unknown (0Ch): Supported 00:09:31.537 Unknown (12h): Supported 00:09:31.537 Copy (19h): Supported LBA-Change 00:09:31.537 Unknown (1Dh): Supported LBA-Change 00:09:31.537 00:09:31.537 Error Log 00:09:31.537 ========= 00:09:31.537 00:09:31.537 Arbitration 00:09:31.537 =========== 00:09:31.537 Arbitration Burst: no limit 00:09:31.537 00:09:31.537 Power Management 00:09:31.537 ================ 00:09:31.537 Number of Power States: 1 00:09:31.537 Current Power State: Power State #0 00:09:31.537 Power State #0: 00:09:31.537 Max Power: 25.00 W 00:09:31.538 Non-Operational State: Operational 00:09:31.538 Entry Latency: 16 microseconds 00:09:31.538 Exit Latency: 4 microseconds 00:09:31.538 Relative Read Throughput: 0 00:09:31.538 Relative Read Latency: 0 00:09:31.538 Relative Write Throughput: 0 00:09:31.538 Relative Write Latency: 0 00:09:31.538 Idle Power: 
Not Reported 00:09:31.538 Active Power: Not Reported 00:09:31.538 Non-Operational Permissive Mode: Not Supported 00:09:31.538 00:09:31.538 Health Information 00:09:31.538 ================== 00:09:31.538 Critical Warnings: 00:09:31.538 Available Spare Space: OK 00:09:31.538 Temperature: OK 00:09:31.538 Device Reliability: OK 00:09:31.538 Read Only: No 00:09:31.538 Volatile Memory Backup: OK 00:09:31.538 Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.538 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:31.538 Available Spare: 0% 00:09:31.538 Available Spare Threshold: 0% 00:09:31.538 Life Percentage Used: 0% 00:09:31.538 Data Units Read: 3756 00:09:31.538 Data Units Written: 1725 00:09:31.538 Host Read Commands: 170815 00:09:31.538 Host Write Commands: 83657 00:09:31.538 Controller Busy Time: 0 minutes 00:09:31.538 Power Cycles: 0 00:09:31.538 Power On Hours: 0 hours 00:09:31.538 Unsafe Shutdowns: 0 00:09:31.538 Unrecoverable Media Errors: 0 00:09:31.538 Lifetime Error Log Entries: 0 00:09:31.538 Warning Temperature Time: 0 minutes 00:09:31.538 Critical Temperature Time: 0 minutes 00:09:31.538 00:09:31.538 Number of Queues 00:09:31.538 ================ 00:09:31.538 Number of I/O Submission Queues: 64 00:09:31.538 Number of I/O Completion Queues: 64 00:09:31.538 00:09:31.538 ZNS Specific Controller Data 00:09:31.538 ============================ 00:09:31.538 Zone Append Size Limit: 0 00:09:31.538 00:09:31.538 00:09:31.538 Active Namespaces 00:09:31.538 ================= 00:09:31.538 Namespace ID:1 00:09:31.538 Error Recovery Timeout: Unlimited 00:09:31.538 Command Set Identifier: NVM (00h) 00:09:31.538 Deallocate: Supported 00:09:31.538 Deallocated/Unwritten Error: Supported 00:09:31.538 Deallocated Read Value: All 0x00 00:09:31.538 Deallocate in Write Zeroes: Not Supported 00:09:31.538 Deallocated Guard Field: 0xFFFF 00:09:31.538 Flush: Supported 00:09:31.538 Reservation: Not Supported 00:09:31.538 Namespace Sharing Capabilities: Private 00:09:31.538 Size (in LBAs): 1048576 (4GiB) 00:09:31.538 Capacity (in LBAs): 1048576 (4GiB) 00:09:31.538 Utilization (in LBAs): 1048576 (4GiB) 00:09:31.538 Thin Provisioning: Not Supported 00:09:31.538 Per-NS Atomic Units: No 00:09:31.538 Maximum Single Source Range Length: 128 00:09:31.538 Maximum Copy Length: 128 00:09:31.538 Maximum Source Range Count: 128 00:09:31.538 NGUID/EUI64 Never Reused: No 00:09:31.538 Namespace Write Protected: No 00:09:31.538 Number of LBA Formats: 8 00:09:31.538 Current LBA Format: LBA Format #04 00:09:31.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.538 00:09:31.538 Namespace ID:2 00:09:31.538 Error Recovery Timeout: Unlimited 00:09:31.538 Command Set Identifier: NVM (00h) 00:09:31.538 Deallocate: Supported 00:09:31.538 Deallocated/Unwritten Error: Supported 00:09:31.538 Deallocated Read Value: All 0x00 00:09:31.538 Deallocate in Write Zeroes: Not Supported 00:09:31.538 Deallocated Guard Field: 0xFFFF 00:09:31.538 Flush: Supported 00:09:31.538 Reservation: Not Supported 00:09:31.538 Namespace Sharing Capabilities: Private 00:09:31.538 Size (in LBAs): 1048576 (4GiB) 
00:09:31.538 Capacity (in LBAs): 1048576 (4GiB) 00:09:31.538 Utilization (in LBAs): 1048576 (4GiB) 00:09:31.538 Thin Provisioning: Not Supported 00:09:31.538 Per-NS Atomic Units: No 00:09:31.538 Maximum Single Source Range Length: 128 00:09:31.538 Maximum Copy Length: 128 00:09:31.538 Maximum Source Range Count: 128 00:09:31.538 NGUID/EUI64 Never Reused: No 00:09:31.538 Namespace Write Protected: No 00:09:31.538 Number of LBA Formats: 8 00:09:31.538 Current LBA Format: LBA Format #04 00:09:31.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.538 00:09:31.538 Namespace ID:3 00:09:31.538 Error Recovery Timeout: Unlimited 00:09:31.538 Command Set Identifier: NVM (00h) 00:09:31.538 Deallocate: Supported 00:09:31.538 Deallocated/Unwritten Error: Supported 00:09:31.538 Deallocated Read Value: All 0x00 00:09:31.538 Deallocate in Write Zeroes: Not Supported 00:09:31.538 Deallocated Guard Field: 0xFFFF 00:09:31.538 Flush: Supported 00:09:31.538 Reservation: Not Supported 00:09:31.538 Namespace Sharing Capabilities: Private 00:09:31.538 Size (in LBAs): 1048576 (4GiB) 00:09:31.538 Capacity (in LBAs): 1048576 (4GiB) 00:09:31.538 Utilization (in LBAs): 1048576 (4GiB) 00:09:31.538 Thin Provisioning: Not Supported 00:09:31.538 Per-NS Atomic Units: No 00:09:31.538 Maximum Single Source Range Length: 128 00:09:31.538 Maximum Copy Length: 128 00:09:31.538 Maximum Source Range Count: 128 00:09:31.538 NGUID/EUI64 Never Reused: No 00:09:31.538 Namespace Write Protected: No 00:09:31.538 Number of LBA Formats: 8 00:09:31.538 Current LBA Format: LBA Format #04 00:09:31.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.538 00:09:31.538 09:47:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:31.538 09:47:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:31.538 ===================================================== 00:09:31.538 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:31.538 ===================================================== 00:09:31.538 Controller Capabilities/Features 00:09:31.538 ================================ 00:09:31.538 Vendor ID: 1b36 00:09:31.538 Subsystem Vendor ID: 1af4 00:09:31.538 Serial Number: 12340 00:09:31.538 Model Number: QEMU NVMe Ctrl 00:09:31.538 Firmware Version: 8.0.0 00:09:31.538 Recommended Arb Burst: 6 00:09:31.538 IEEE OUI Identifier: 00 54 52 00:09:31.538 Multi-path I/O 00:09:31.538 May have multiple subsystem ports: No 00:09:31.538 May have multiple controllers: No 00:09:31.538 Associated with SR-IOV VF: No 00:09:31.538 Max Data Transfer Size: 524288 00:09:31.538 Max Number of Namespaces: 256 00:09:31.538 Max 
Number of I/O Queues: 64 00:09:31.538 NVMe Specification Version (VS): 1.4 00:09:31.538 NVMe Specification Version (Identify): 1.4 00:09:31.538 Maximum Queue Entries: 2048 00:09:31.538 Contiguous Queues Required: Yes 00:09:31.538 Arbitration Mechanisms Supported 00:09:31.538 Weighted Round Robin: Not Supported 00:09:31.538 Vendor Specific: Not Supported 00:09:31.538 Reset Timeout: 7500 ms 00:09:31.538 Doorbell Stride: 4 bytes 00:09:31.538 NVM Subsystem Reset: Not Supported 00:09:31.538 Command Sets Supported 00:09:31.538 NVM Command Set: Supported 00:09:31.538 Boot Partition: Not Supported 00:09:31.538 Memory Page Size Minimum: 4096 bytes 00:09:31.538 Memory Page Size Maximum: 65536 bytes 00:09:31.538 Persistent Memory Region: Not Supported 00:09:31.538 Optional Asynchronous Events Supported 00:09:31.538 Namespace Attribute Notices: Supported 00:09:31.538 Firmware Activation Notices: Not Supported 00:09:31.539 ANA Change Notices: Not Supported 00:09:31.539 PLE Aggregate Log Change Notices: Not Supported 00:09:31.539 LBA Status Info Alert Notices: Not Supported 00:09:31.539 EGE Aggregate Log Change Notices: Not Supported 00:09:31.539 Normal NVM Subsystem Shutdown event: Not Supported 00:09:31.539 Zone Descriptor Change Notices: Not Supported 00:09:31.539 Discovery Log Change Notices: Not Supported 00:09:31.539 Controller Attributes 00:09:31.539 128-bit Host Identifier: Not Supported 00:09:31.539 Non-Operational Permissive Mode: Not Supported 00:09:31.539 NVM Sets: Not Supported 00:09:31.539 Read Recovery Levels: Not Supported 00:09:31.539 Endurance Groups: Not Supported 00:09:31.539 Predictable Latency Mode: Not Supported 00:09:31.539 Traffic Based Keep ALive: Not Supported 00:09:31.539 Namespace Granularity: Not Supported 00:09:31.539 SQ Associations: Not Supported 00:09:31.539 UUID List: Not Supported 00:09:31.539 Multi-Domain Subsystem: Not Supported 00:09:31.539 Fixed Capacity Management: Not Supported 00:09:31.539 Variable Capacity Management: Not Supported 00:09:31.539 Delete Endurance Group: Not Supported 00:09:31.539 Delete NVM Set: Not Supported 00:09:31.539 Extended LBA Formats Supported: Supported 00:09:31.539 Flexible Data Placement Supported: Not Supported 00:09:31.539 00:09:31.539 Controller Memory Buffer Support 00:09:31.539 ================================ 00:09:31.539 Supported: No 00:09:31.539 00:09:31.539 Persistent Memory Region Support 00:09:31.539 ================================ 00:09:31.539 Supported: No 00:09:31.539 00:09:31.539 Admin Command Set Attributes 00:09:31.539 ============================ 00:09:31.539 Security Send/Receive: Not Supported 00:09:31.539 Format NVM: Supported 00:09:31.539 Firmware Activate/Download: Not Supported 00:09:31.539 Namespace Management: Supported 00:09:31.539 Device Self-Test: Not Supported 00:09:31.539 Directives: Supported 00:09:31.539 NVMe-MI: Not Supported 00:09:31.539 Virtualization Management: Not Supported 00:09:31.539 Doorbell Buffer Config: Supported 00:09:31.539 Get LBA Status Capability: Not Supported 00:09:31.539 Command & Feature Lockdown Capability: Not Supported 00:09:31.539 Abort Command Limit: 4 00:09:31.539 Async Event Request Limit: 4 00:09:31.539 Number of Firmware Slots: N/A 00:09:31.539 Firmware Slot 1 Read-Only: N/A 00:09:31.539 Firmware Activation Without Reset: N/A 00:09:31.539 Multiple Update Detection Support: N/A 00:09:31.539 Firmware Update Granularity: No Information Provided 00:09:31.539 Per-Namespace SMART Log: Yes 00:09:31.539 Asymmetric Namespace Access Log Page: Not Supported 00:09:31.539 Subsystem 
NQN: nqn.2019-08.org.qemu:12340 00:09:31.539 Command Effects Log Page: Supported 00:09:31.539 Get Log Page Extended Data: Supported 00:09:31.539 Telemetry Log Pages: Not Supported 00:09:31.539 Persistent Event Log Pages: Not Supported 00:09:31.539 Supported Log Pages Log Page: May Support 00:09:31.539 Commands Supported & Effects Log Page: Not Supported 00:09:31.539 Feature Identifiers & Effects Log Page:May Support 00:09:31.539 NVMe-MI Commands & Effects Log Page: May Support 00:09:31.539 Data Area 4 for Telemetry Log: Not Supported 00:09:31.539 Error Log Page Entries Supported: 1 00:09:31.539 Keep Alive: Not Supported 00:09:31.539 00:09:31.539 NVM Command Set Attributes 00:09:31.539 ========================== 00:09:31.539 Submission Queue Entry Size 00:09:31.539 Max: 64 00:09:31.539 Min: 64 00:09:31.539 Completion Queue Entry Size 00:09:31.539 Max: 16 00:09:31.539 Min: 16 00:09:31.539 Number of Namespaces: 256 00:09:31.539 Compare Command: Supported 00:09:31.539 Write Uncorrectable Command: Not Supported 00:09:31.539 Dataset Management Command: Supported 00:09:31.539 Write Zeroes Command: Supported 00:09:31.539 Set Features Save Field: Supported 00:09:31.539 Reservations: Not Supported 00:09:31.539 Timestamp: Supported 00:09:31.539 Copy: Supported 00:09:31.539 Volatile Write Cache: Present 00:09:31.539 Atomic Write Unit (Normal): 1 00:09:31.539 Atomic Write Unit (PFail): 1 00:09:31.539 Atomic Compare & Write Unit: 1 00:09:31.539 Fused Compare & Write: Not Supported 00:09:31.539 Scatter-Gather List 00:09:31.539 SGL Command Set: Supported 00:09:31.539 SGL Keyed: Not Supported 00:09:31.539 SGL Bit Bucket Descriptor: Not Supported 00:09:31.539 SGL Metadata Pointer: Not Supported 00:09:31.539 Oversized SGL: Not Supported 00:09:31.539 SGL Metadata Address: Not Supported 00:09:31.539 SGL Offset: Not Supported 00:09:31.539 Transport SGL Data Block: Not Supported 00:09:31.539 Replay Protected Memory Block: Not Supported 00:09:31.539 00:09:31.539 Firmware Slot Information 00:09:31.539 ========================= 00:09:31.539 Active slot: 1 00:09:31.539 Slot 1 Firmware Revision: 1.0 00:09:31.539 00:09:31.539 00:09:31.539 Commands Supported and Effects 00:09:31.539 ============================== 00:09:31.539 Admin Commands 00:09:31.539 -------------- 00:09:31.539 Delete I/O Submission Queue (00h): Supported 00:09:31.539 Create I/O Submission Queue (01h): Supported 00:09:31.539 Get Log Page (02h): Supported 00:09:31.539 Delete I/O Completion Queue (04h): Supported 00:09:31.539 Create I/O Completion Queue (05h): Supported 00:09:31.539 Identify (06h): Supported 00:09:31.539 Abort (08h): Supported 00:09:31.539 Set Features (09h): Supported 00:09:31.539 Get Features (0Ah): Supported 00:09:31.539 Asynchronous Event Request (0Ch): Supported 00:09:31.539 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:31.539 Directive Send (19h): Supported 00:09:31.539 Directive Receive (1Ah): Supported 00:09:31.539 Virtualization Management (1Ch): Supported 00:09:31.539 Doorbell Buffer Config (7Ch): Supported 00:09:31.539 Format NVM (80h): Supported LBA-Change 00:09:31.539 I/O Commands 00:09:31.539 ------------ 00:09:31.539 Flush (00h): Supported LBA-Change 00:09:31.539 Write (01h): Supported LBA-Change 00:09:31.539 Read (02h): Supported 00:09:31.539 Compare (05h): Supported 00:09:31.539 Write Zeroes (08h): Supported LBA-Change 00:09:31.539 Dataset Management (09h): Supported LBA-Change 00:09:31.539 Unknown (0Ch): Supported 00:09:31.539 Unknown (12h): Supported 00:09:31.539 Copy (19h): Supported LBA-Change 
00:09:31.539 Unknown (1Dh): Supported LBA-Change 00:09:31.539 00:09:31.539 Error Log 00:09:31.539 ========= 00:09:31.539 00:09:31.539 Arbitration 00:09:31.539 =========== 00:09:31.539 Arbitration Burst: no limit 00:09:31.539 00:09:31.539 Power Management 00:09:31.539 ================ 00:09:31.539 Number of Power States: 1 00:09:31.539 Current Power State: Power State #0 00:09:31.539 Power State #0: 00:09:31.539 Max Power: 25.00 W 00:09:31.539 Non-Operational State: Operational 00:09:31.539 Entry Latency: 16 microseconds 00:09:31.539 Exit Latency: 4 microseconds 00:09:31.539 Relative Read Throughput: 0 00:09:31.539 Relative Read Latency: 0 00:09:31.539 Relative Write Throughput: 0 00:09:31.539 Relative Write Latency: 0 00:09:31.801 Idle Power: Not Reported 00:09:31.801 Active Power: Not Reported 00:09:31.801 Non-Operational Permissive Mode: Not Supported 00:09:31.801 00:09:31.801 Health Information 00:09:31.801 ================== 00:09:31.801 Critical Warnings: 00:09:31.801 Available Spare Space: OK 00:09:31.801 Temperature: OK 00:09:31.801 Device Reliability: OK 00:09:31.801 Read Only: No 00:09:31.801 Volatile Memory Backup: OK 00:09:31.801 Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.801 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:31.801 Available Spare: 0% 00:09:31.801 Available Spare Threshold: 0% 00:09:31.801 Life Percentage Used: 0% 00:09:31.801 Data Units Read: 1693 00:09:31.801 Data Units Written: 773 00:09:31.801 Host Read Commands: 79602 00:09:31.801 Host Write Commands: 39476 00:09:31.801 Controller Busy Time: 0 minutes 00:09:31.801 Power Cycles: 0 00:09:31.801 Power On Hours: 0 hours 00:09:31.801 Unsafe Shutdowns: 0 00:09:31.801 Unrecoverable Media Errors: 0 00:09:31.801 Lifetime Error Log Entries: 0 00:09:31.801 Warning Temperature Time: 0 minutes 00:09:31.801 Critical Temperature Time: 0 minutes 00:09:31.801 00:09:31.801 Number of Queues 00:09:31.801 ================ 00:09:31.801 Number of I/O Submission Queues: 64 00:09:31.801 Number of I/O Completion Queues: 64 00:09:31.801 00:09:31.801 ZNS Specific Controller Data 00:09:31.801 ============================ 00:09:31.801 Zone Append Size Limit: 0 00:09:31.801 00:09:31.801 00:09:31.801 Active Namespaces 00:09:31.801 ================= 00:09:31.801 Namespace ID:1 00:09:31.801 Error Recovery Timeout: Unlimited 00:09:31.801 Command Set Identifier: NVM (00h) 00:09:31.801 Deallocate: Supported 00:09:31.801 Deallocated/Unwritten Error: Supported 00:09:31.801 Deallocated Read Value: All 0x00 00:09:31.801 Deallocate in Write Zeroes: Not Supported 00:09:31.801 Deallocated Guard Field: 0xFFFF 00:09:31.801 Flush: Supported 00:09:31.801 Reservation: Not Supported 00:09:31.801 Metadata Transferred as: Separate Metadata Buffer 00:09:31.801 Namespace Sharing Capabilities: Private 00:09:31.801 Size (in LBAs): 1548666 (5GiB) 00:09:31.801 Capacity (in LBAs): 1548666 (5GiB) 00:09:31.801 Utilization (in LBAs): 1548666 (5GiB) 00:09:31.801 Thin Provisioning: Not Supported 00:09:31.801 Per-NS Atomic Units: No 00:09:31.801 Maximum Single Source Range Length: 128 00:09:31.801 Maximum Copy Length: 128 00:09:31.801 Maximum Source Range Count: 128 00:09:31.801 NGUID/EUI64 Never Reused: No 00:09:31.801 Namespace Write Protected: No 00:09:31.801 Number of LBA Formats: 8 00:09:31.801 Current LBA Format: LBA Format #07 00:09:31.801 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.801 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.801 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.801 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:31.801 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.801 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.801 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.801 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.801 00:09:31.801 09:47:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:31.801 09:47:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:31.801 ===================================================== 00:09:31.801 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:31.801 ===================================================== 00:09:31.801 Controller Capabilities/Features 00:09:31.801 ================================ 00:09:31.801 Vendor ID: 1b36 00:09:31.801 Subsystem Vendor ID: 1af4 00:09:31.801 Serial Number: 12341 00:09:31.801 Model Number: QEMU NVMe Ctrl 00:09:31.801 Firmware Version: 8.0.0 00:09:31.801 Recommended Arb Burst: 6 00:09:31.801 IEEE OUI Identifier: 00 54 52 00:09:31.801 Multi-path I/O 00:09:31.801 May have multiple subsystem ports: No 00:09:31.801 May have multiple controllers: No 00:09:31.802 Associated with SR-IOV VF: No 00:09:31.802 Max Data Transfer Size: 524288 00:09:31.802 Max Number of Namespaces: 256 00:09:31.802 Max Number of I/O Queues: 64 00:09:31.802 NVMe Specification Version (VS): 1.4 00:09:31.802 NVMe Specification Version (Identify): 1.4 00:09:31.802 Maximum Queue Entries: 2048 00:09:31.802 Contiguous Queues Required: Yes 00:09:31.802 Arbitration Mechanisms Supported 00:09:31.802 Weighted Round Robin: Not Supported 00:09:31.802 Vendor Specific: Not Supported 00:09:31.802 Reset Timeout: 7500 ms 00:09:31.802 Doorbell Stride: 4 bytes 00:09:31.802 NVM Subsystem Reset: Not Supported 00:09:31.802 Command Sets Supported 00:09:31.802 NVM Command Set: Supported 00:09:31.802 Boot Partition: Not Supported 00:09:31.802 Memory Page Size Minimum: 4096 bytes 00:09:31.802 Memory Page Size Maximum: 65536 bytes 00:09:31.802 Persistent Memory Region: Not Supported 00:09:31.802 Optional Asynchronous Events Supported 00:09:31.802 Namespace Attribute Notices: Supported 00:09:31.802 Firmware Activation Notices: Not Supported 00:09:31.802 ANA Change Notices: Not Supported 00:09:31.802 PLE Aggregate Log Change Notices: Not Supported 00:09:31.802 LBA Status Info Alert Notices: Not Supported 00:09:31.802 EGE Aggregate Log Change Notices: Not Supported 00:09:31.802 Normal NVM Subsystem Shutdown event: Not Supported 00:09:31.802 Zone Descriptor Change Notices: Not Supported 00:09:31.802 Discovery Log Change Notices: Not Supported 00:09:31.802 Controller Attributes 00:09:31.802 128-bit Host Identifier: Not Supported 00:09:31.802 Non-Operational Permissive Mode: Not Supported 00:09:31.802 NVM Sets: Not Supported 00:09:31.802 Read Recovery Levels: Not Supported 00:09:31.802 Endurance Groups: Not Supported 00:09:31.802 Predictable Latency Mode: Not Supported 00:09:31.802 Traffic Based Keep ALive: Not Supported 00:09:31.802 Namespace Granularity: Not Supported 00:09:31.802 SQ Associations: Not Supported 00:09:31.802 UUID List: Not Supported 00:09:31.802 Multi-Domain Subsystem: Not Supported 00:09:31.802 Fixed Capacity Management: Not Supported 00:09:31.802 Variable Capacity Management: Not Supported 00:09:31.802 Delete Endurance Group: Not Supported 00:09:31.802 Delete NVM Set: Not Supported 00:09:31.802 Extended LBA Formats Supported: Supported 00:09:31.802 Flexible Data Placement Supported: Not Supported 00:09:31.802 00:09:31.802 
Controller Memory Buffer Support 00:09:31.802 ================================ 00:09:31.802 Supported: No 00:09:31.802 00:09:31.802 Persistent Memory Region Support 00:09:31.802 ================================ 00:09:31.802 Supported: No 00:09:31.802 00:09:31.802 Admin Command Set Attributes 00:09:31.802 ============================ 00:09:31.802 Security Send/Receive: Not Supported 00:09:31.802 Format NVM: Supported 00:09:31.802 Firmware Activate/Download: Not Supported 00:09:31.802 Namespace Management: Supported 00:09:31.802 Device Self-Test: Not Supported 00:09:31.802 Directives: Supported 00:09:31.802 NVMe-MI: Not Supported 00:09:31.802 Virtualization Management: Not Supported 00:09:31.802 Doorbell Buffer Config: Supported 00:09:31.802 Get LBA Status Capability: Not Supported 00:09:31.802 Command & Feature Lockdown Capability: Not Supported 00:09:31.802 Abort Command Limit: 4 00:09:31.802 Async Event Request Limit: 4 00:09:31.802 Number of Firmware Slots: N/A 00:09:31.802 Firmware Slot 1 Read-Only: N/A 00:09:31.802 Firmware Activation Without Reset: N/A 00:09:31.802 Multiple Update Detection Support: N/A 00:09:31.802 Firmware Update Granularity: No Information Provided 00:09:31.802 Per-Namespace SMART Log: Yes 00:09:31.802 Asymmetric Namespace Access Log Page: Not Supported 00:09:31.802 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:31.802 Command Effects Log Page: Supported 00:09:31.802 Get Log Page Extended Data: Supported 00:09:31.802 Telemetry Log Pages: Not Supported 00:09:31.802 Persistent Event Log Pages: Not Supported 00:09:31.802 Supported Log Pages Log Page: May Support 00:09:31.802 Commands Supported & Effects Log Page: Not Supported 00:09:31.802 Feature Identifiers & Effects Log Page:May Support 00:09:31.802 NVMe-MI Commands & Effects Log Page: May Support 00:09:31.802 Data Area 4 for Telemetry Log: Not Supported 00:09:31.802 Error Log Page Entries Supported: 1 00:09:31.802 Keep Alive: Not Supported 00:09:31.802 00:09:31.802 NVM Command Set Attributes 00:09:31.802 ========================== 00:09:31.802 Submission Queue Entry Size 00:09:31.802 Max: 64 00:09:31.802 Min: 64 00:09:31.802 Completion Queue Entry Size 00:09:31.802 Max: 16 00:09:31.802 Min: 16 00:09:31.802 Number of Namespaces: 256 00:09:31.802 Compare Command: Supported 00:09:31.802 Write Uncorrectable Command: Not Supported 00:09:31.802 Dataset Management Command: Supported 00:09:31.802 Write Zeroes Command: Supported 00:09:31.802 Set Features Save Field: Supported 00:09:31.802 Reservations: Not Supported 00:09:31.802 Timestamp: Supported 00:09:31.802 Copy: Supported 00:09:31.802 Volatile Write Cache: Present 00:09:31.802 Atomic Write Unit (Normal): 1 00:09:31.802 Atomic Write Unit (PFail): 1 00:09:31.802 Atomic Compare & Write Unit: 1 00:09:31.802 Fused Compare & Write: Not Supported 00:09:31.802 Scatter-Gather List 00:09:31.802 SGL Command Set: Supported 00:09:31.802 SGL Keyed: Not Supported 00:09:31.802 SGL Bit Bucket Descriptor: Not Supported 00:09:31.802 SGL Metadata Pointer: Not Supported 00:09:31.802 Oversized SGL: Not Supported 00:09:31.802 SGL Metadata Address: Not Supported 00:09:31.802 SGL Offset: Not Supported 00:09:31.802 Transport SGL Data Block: Not Supported 00:09:31.802 Replay Protected Memory Block: Not Supported 00:09:31.802 00:09:31.802 Firmware Slot Information 00:09:31.802 ========================= 00:09:31.802 Active slot: 1 00:09:31.802 Slot 1 Firmware Revision: 1.0 00:09:31.802 00:09:31.802 00:09:31.802 Commands Supported and Effects 00:09:31.802 ============================== 
00:09:31.802 Admin Commands 00:09:31.802 -------------- 00:09:31.802 Delete I/O Submission Queue (00h): Supported 00:09:31.802 Create I/O Submission Queue (01h): Supported 00:09:31.802 Get Log Page (02h): Supported 00:09:31.802 Delete I/O Completion Queue (04h): Supported 00:09:31.802 Create I/O Completion Queue (05h): Supported 00:09:31.802 Identify (06h): Supported 00:09:31.802 Abort (08h): Supported 00:09:31.802 Set Features (09h): Supported 00:09:31.802 Get Features (0Ah): Supported 00:09:31.802 Asynchronous Event Request (0Ch): Supported 00:09:31.802 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:31.802 Directive Send (19h): Supported 00:09:31.802 Directive Receive (1Ah): Supported 00:09:31.802 Virtualization Management (1Ch): Supported 00:09:31.802 Doorbell Buffer Config (7Ch): Supported 00:09:31.802 Format NVM (80h): Supported LBA-Change 00:09:31.802 I/O Commands 00:09:31.802 ------------ 00:09:31.802 Flush (00h): Supported LBA-Change 00:09:31.802 Write (01h): Supported LBA-Change 00:09:31.802 Read (02h): Supported 00:09:31.802 Compare (05h): Supported 00:09:31.802 Write Zeroes (08h): Supported LBA-Change 00:09:31.802 Dataset Management (09h): Supported LBA-Change 00:09:31.802 Unknown (0Ch): Supported 00:09:31.802 Unknown (12h): Supported 00:09:31.802 Copy (19h): Supported LBA-Change 00:09:31.802 Unknown (1Dh): Supported LBA-Change 00:09:31.802 00:09:31.802 Error Log 00:09:31.802 ========= 00:09:31.802 00:09:31.802 Arbitration 00:09:31.802 =========== 00:09:31.802 Arbitration Burst: no limit 00:09:31.802 00:09:31.802 Power Management 00:09:31.802 ================ 00:09:31.802 Number of Power States: 1 00:09:31.802 Current Power State: Power State #0 00:09:31.802 Power State #0: 00:09:31.802 Max Power: 25.00 W 00:09:31.802 Non-Operational State: Operational 00:09:31.802 Entry Latency: 16 microseconds 00:09:31.802 Exit Latency: 4 microseconds 00:09:31.802 Relative Read Throughput: 0 00:09:31.802 Relative Read Latency: 0 00:09:31.802 Relative Write Throughput: 0 00:09:31.802 Relative Write Latency: 0 00:09:31.802 Idle Power: Not Reported 00:09:31.802 Active Power: Not Reported 00:09:31.802 Non-Operational Permissive Mode: Not Supported 00:09:31.802 00:09:31.802 Health Information 00:09:31.802 ================== 00:09:31.802 Critical Warnings: 00:09:31.802 Available Spare Space: OK 00:09:31.802 Temperature: OK 00:09:31.802 Device Reliability: OK 00:09:31.802 Read Only: No 00:09:31.802 Volatile Memory Backup: OK 00:09:31.802 Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.802 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:31.802 Available Spare: 0% 00:09:31.802 Available Spare Threshold: 0% 00:09:31.802 Life Percentage Used: 0% 00:09:31.802 Data Units Read: 1197 00:09:31.802 Data Units Written: 550 00:09:31.802 Host Read Commands: 56187 00:09:31.802 Host Write Commands: 27581 00:09:31.802 Controller Busy Time: 0 minutes 00:09:31.802 Power Cycles: 0 00:09:31.802 Power On Hours: 0 hours 00:09:31.802 Unsafe Shutdowns: 0 00:09:31.802 Unrecoverable Media Errors: 0 00:09:31.803 Lifetime Error Log Entries: 0 00:09:31.803 Warning Temperature Time: 0 minutes 00:09:31.803 Critical Temperature Time: 0 minutes 00:09:31.803 00:09:31.803 Number of Queues 00:09:31.803 ================ 00:09:31.803 Number of I/O Submission Queues: 64 00:09:31.803 Number of I/O Completion Queues: 64 00:09:31.803 00:09:31.803 ZNS Specific Controller Data 00:09:31.803 ============================ 00:09:31.803 Zone Append Size Limit: 0 00:09:31.803 00:09:31.803 00:09:31.803 Active Namespaces 
00:09:31.803 ================= 00:09:31.803 Namespace ID:1 00:09:31.803 Error Recovery Timeout: Unlimited 00:09:31.803 Command Set Identifier: NVM (00h) 00:09:31.803 Deallocate: Supported 00:09:31.803 Deallocated/Unwritten Error: Supported 00:09:31.803 Deallocated Read Value: All 0x00 00:09:31.803 Deallocate in Write Zeroes: Not Supported 00:09:31.803 Deallocated Guard Field: 0xFFFF 00:09:31.803 Flush: Supported 00:09:31.803 Reservation: Not Supported 00:09:31.803 Namespace Sharing Capabilities: Private 00:09:31.803 Size (in LBAs): 1310720 (5GiB) 00:09:31.803 Capacity (in LBAs): 1310720 (5GiB) 00:09:31.803 Utilization (in LBAs): 1310720 (5GiB) 00:09:31.803 Thin Provisioning: Not Supported 00:09:31.803 Per-NS Atomic Units: No 00:09:31.803 Maximum Single Source Range Length: 128 00:09:31.803 Maximum Copy Length: 128 00:09:31.803 Maximum Source Range Count: 128 00:09:31.803 NGUID/EUI64 Never Reused: No 00:09:31.803 Namespace Write Protected: No 00:09:31.803 Number of LBA Formats: 8 00:09:31.803 Current LBA Format: LBA Format #04 00:09:31.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:31.803 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:31.803 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:31.803 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:31.803 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:31.803 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:31.803 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:31.803 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:31.803 00:09:31.803 09:47:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:31.803 09:47:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:32.064 ===================================================== 00:09:32.064 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:32.064 ===================================================== 00:09:32.064 Controller Capabilities/Features 00:09:32.064 ================================ 00:09:32.064 Vendor ID: 1b36 00:09:32.065 Subsystem Vendor ID: 1af4 00:09:32.065 Serial Number: 12342 00:09:32.065 Model Number: QEMU NVMe Ctrl 00:09:32.065 Firmware Version: 8.0.0 00:09:32.065 Recommended Arb Burst: 6 00:09:32.065 IEEE OUI Identifier: 00 54 52 00:09:32.065 Multi-path I/O 00:09:32.065 May have multiple subsystem ports: No 00:09:32.065 May have multiple controllers: No 00:09:32.065 Associated with SR-IOV VF: No 00:09:32.065 Max Data Transfer Size: 524288 00:09:32.065 Max Number of Namespaces: 256 00:09:32.065 Max Number of I/O Queues: 64 00:09:32.065 NVMe Specification Version (VS): 1.4 00:09:32.065 NVMe Specification Version (Identify): 1.4 00:09:32.065 Maximum Queue Entries: 2048 00:09:32.065 Contiguous Queues Required: Yes 00:09:32.065 Arbitration Mechanisms Supported 00:09:32.065 Weighted Round Robin: Not Supported 00:09:32.065 Vendor Specific: Not Supported 00:09:32.065 Reset Timeout: 7500 ms 00:09:32.065 Doorbell Stride: 4 bytes 00:09:32.065 NVM Subsystem Reset: Not Supported 00:09:32.065 Command Sets Supported 00:09:32.065 NVM Command Set: Supported 00:09:32.065 Boot Partition: Not Supported 00:09:32.065 Memory Page Size Minimum: 4096 bytes 00:09:32.065 Memory Page Size Maximum: 65536 bytes 00:09:32.065 Persistent Memory Region: Not Supported 00:09:32.065 Optional Asynchronous Events Supported 00:09:32.065 Namespace Attribute Notices: Supported 00:09:32.065 Firmware Activation Notices: Not Supported 00:09:32.065 ANA Change Notices: Not Supported 
00:09:32.065 PLE Aggregate Log Change Notices: Not Supported 00:09:32.065 LBA Status Info Alert Notices: Not Supported 00:09:32.065 EGE Aggregate Log Change Notices: Not Supported 00:09:32.065 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.065 Zone Descriptor Change Notices: Not Supported 00:09:32.065 Discovery Log Change Notices: Not Supported 00:09:32.065 Controller Attributes 00:09:32.065 128-bit Host Identifier: Not Supported 00:09:32.065 Non-Operational Permissive Mode: Not Supported 00:09:32.065 NVM Sets: Not Supported 00:09:32.065 Read Recovery Levels: Not Supported 00:09:32.065 Endurance Groups: Not Supported 00:09:32.065 Predictable Latency Mode: Not Supported 00:09:32.065 Traffic Based Keep ALive: Not Supported 00:09:32.065 Namespace Granularity: Not Supported 00:09:32.065 SQ Associations: Not Supported 00:09:32.065 UUID List: Not Supported 00:09:32.065 Multi-Domain Subsystem: Not Supported 00:09:32.065 Fixed Capacity Management: Not Supported 00:09:32.065 Variable Capacity Management: Not Supported 00:09:32.065 Delete Endurance Group: Not Supported 00:09:32.065 Delete NVM Set: Not Supported 00:09:32.065 Extended LBA Formats Supported: Supported 00:09:32.065 Flexible Data Placement Supported: Not Supported 00:09:32.065 00:09:32.065 Controller Memory Buffer Support 00:09:32.065 ================================ 00:09:32.065 Supported: No 00:09:32.065 00:09:32.065 Persistent Memory Region Support 00:09:32.065 ================================ 00:09:32.065 Supported: No 00:09:32.065 00:09:32.065 Admin Command Set Attributes 00:09:32.065 ============================ 00:09:32.065 Security Send/Receive: Not Supported 00:09:32.065 Format NVM: Supported 00:09:32.065 Firmware Activate/Download: Not Supported 00:09:32.065 Namespace Management: Supported 00:09:32.065 Device Self-Test: Not Supported 00:09:32.065 Directives: Supported 00:09:32.065 NVMe-MI: Not Supported 00:09:32.065 Virtualization Management: Not Supported 00:09:32.065 Doorbell Buffer Config: Supported 00:09:32.065 Get LBA Status Capability: Not Supported 00:09:32.065 Command & Feature Lockdown Capability: Not Supported 00:09:32.065 Abort Command Limit: 4 00:09:32.065 Async Event Request Limit: 4 00:09:32.065 Number of Firmware Slots: N/A 00:09:32.065 Firmware Slot 1 Read-Only: N/A 00:09:32.065 Firmware Activation Without Reset: N/A 00:09:32.065 Multiple Update Detection Support: N/A 00:09:32.065 Firmware Update Granularity: No Information Provided 00:09:32.065 Per-Namespace SMART Log: Yes 00:09:32.065 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.065 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:32.065 Command Effects Log Page: Supported 00:09:32.065 Get Log Page Extended Data: Supported 00:09:32.065 Telemetry Log Pages: Not Supported 00:09:32.065 Persistent Event Log Pages: Not Supported 00:09:32.065 Supported Log Pages Log Page: May Support 00:09:32.065 Commands Supported & Effects Log Page: Not Supported 00:09:32.065 Feature Identifiers & Effects Log Page:May Support 00:09:32.065 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.065 Data Area 4 for Telemetry Log: Not Supported 00:09:32.065 Error Log Page Entries Supported: 1 00:09:32.065 Keep Alive: Not Supported 00:09:32.065 00:09:32.065 NVM Command Set Attributes 00:09:32.065 ========================== 00:09:32.065 Submission Queue Entry Size 00:09:32.065 Max: 64 00:09:32.065 Min: 64 00:09:32.065 Completion Queue Entry Size 00:09:32.065 Max: 16 00:09:32.065 Min: 16 00:09:32.065 Number of Namespaces: 256 00:09:32.065 Compare Command: 
Supported 00:09:32.065 Write Uncorrectable Command: Not Supported 00:09:32.065 Dataset Management Command: Supported 00:09:32.065 Write Zeroes Command: Supported 00:09:32.065 Set Features Save Field: Supported 00:09:32.065 Reservations: Not Supported 00:09:32.065 Timestamp: Supported 00:09:32.065 Copy: Supported 00:09:32.065 Volatile Write Cache: Present 00:09:32.065 Atomic Write Unit (Normal): 1 00:09:32.065 Atomic Write Unit (PFail): 1 00:09:32.065 Atomic Compare & Write Unit: 1 00:09:32.065 Fused Compare & Write: Not Supported 00:09:32.065 Scatter-Gather List 00:09:32.065 SGL Command Set: Supported 00:09:32.065 SGL Keyed: Not Supported 00:09:32.065 SGL Bit Bucket Descriptor: Not Supported 00:09:32.065 SGL Metadata Pointer: Not Supported 00:09:32.065 Oversized SGL: Not Supported 00:09:32.065 SGL Metadata Address: Not Supported 00:09:32.065 SGL Offset: Not Supported 00:09:32.065 Transport SGL Data Block: Not Supported 00:09:32.065 Replay Protected Memory Block: Not Supported 00:09:32.065 00:09:32.065 Firmware Slot Information 00:09:32.065 ========================= 00:09:32.065 Active slot: 1 00:09:32.065 Slot 1 Firmware Revision: 1.0 00:09:32.065 00:09:32.065 00:09:32.065 Commands Supported and Effects 00:09:32.065 ============================== 00:09:32.065 Admin Commands 00:09:32.065 -------------- 00:09:32.065 Delete I/O Submission Queue (00h): Supported 00:09:32.065 Create I/O Submission Queue (01h): Supported 00:09:32.065 Get Log Page (02h): Supported 00:09:32.065 Delete I/O Completion Queue (04h): Supported 00:09:32.065 Create I/O Completion Queue (05h): Supported 00:09:32.065 Identify (06h): Supported 00:09:32.065 Abort (08h): Supported 00:09:32.065 Set Features (09h): Supported 00:09:32.065 Get Features (0Ah): Supported 00:09:32.065 Asynchronous Event Request (0Ch): Supported 00:09:32.065 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.065 Directive Send (19h): Supported 00:09:32.065 Directive Receive (1Ah): Supported 00:09:32.065 Virtualization Management (1Ch): Supported 00:09:32.065 Doorbell Buffer Config (7Ch): Supported 00:09:32.065 Format NVM (80h): Supported LBA-Change 00:09:32.065 I/O Commands 00:09:32.065 ------------ 00:09:32.065 Flush (00h): Supported LBA-Change 00:09:32.065 Write (01h): Supported LBA-Change 00:09:32.065 Read (02h): Supported 00:09:32.065 Compare (05h): Supported 00:09:32.065 Write Zeroes (08h): Supported LBA-Change 00:09:32.065 Dataset Management (09h): Supported LBA-Change 00:09:32.065 Unknown (0Ch): Supported 00:09:32.065 Unknown (12h): Supported 00:09:32.065 Copy (19h): Supported LBA-Change 00:09:32.065 Unknown (1Dh): Supported LBA-Change 00:09:32.065 00:09:32.065 Error Log 00:09:32.065 ========= 00:09:32.065 00:09:32.065 Arbitration 00:09:32.065 =========== 00:09:32.065 Arbitration Burst: no limit 00:09:32.065 00:09:32.065 Power Management 00:09:32.065 ================ 00:09:32.065 Number of Power States: 1 00:09:32.065 Current Power State: Power State #0 00:09:32.065 Power State #0: 00:09:32.065 Max Power: 25.00 W 00:09:32.065 Non-Operational State: Operational 00:09:32.065 Entry Latency: 16 microseconds 00:09:32.065 Exit Latency: 4 microseconds 00:09:32.065 Relative Read Throughput: 0 00:09:32.065 Relative Read Latency: 0 00:09:32.065 Relative Write Throughput: 0 00:09:32.065 Relative Write Latency: 0 00:09:32.065 Idle Power: Not Reported 00:09:32.065 Active Power: Not Reported 00:09:32.065 Non-Operational Permissive Mode: Not Supported 00:09:32.065 00:09:32.065 Health Information 00:09:32.065 ================== 00:09:32.065 
Critical Warnings: 00:09:32.065 Available Spare Space: OK 00:09:32.065 Temperature: OK 00:09:32.065 Device Reliability: OK 00:09:32.066 Read Only: No 00:09:32.066 Volatile Memory Backup: OK 00:09:32.066 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.066 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.066 Available Spare: 0% 00:09:32.066 Available Spare Threshold: 0% 00:09:32.066 Life Percentage Used: 0% 00:09:32.066 Data Units Read: 3756 00:09:32.066 Data Units Written: 1725 00:09:32.066 Host Read Commands: 170815 00:09:32.066 Host Write Commands: 83657 00:09:32.066 Controller Busy Time: 0 minutes 00:09:32.066 Power Cycles: 0 00:09:32.066 Power On Hours: 0 hours 00:09:32.066 Unsafe Shutdowns: 0 00:09:32.066 Unrecoverable Media Errors: 0 00:09:32.066 Lifetime Error Log Entries: 0 00:09:32.066 Warning Temperature Time: 0 minutes 00:09:32.066 Critical Temperature Time: 0 minutes 00:09:32.066 00:09:32.066 Number of Queues 00:09:32.066 ================ 00:09:32.066 Number of I/O Submission Queues: 64 00:09:32.066 Number of I/O Completion Queues: 64 00:09:32.066 00:09:32.066 ZNS Specific Controller Data 00:09:32.066 ============================ 00:09:32.066 Zone Append Size Limit: 0 00:09:32.066 00:09:32.066 00:09:32.066 Active Namespaces 00:09:32.066 ================= 00:09:32.066 Namespace ID:1 00:09:32.066 Error Recovery Timeout: Unlimited 00:09:32.066 Command Set Identifier: NVM (00h) 00:09:32.066 Deallocate: Supported 00:09:32.066 Deallocated/Unwritten Error: Supported 00:09:32.066 Deallocated Read Value: All 0x00 00:09:32.066 Deallocate in Write Zeroes: Not Supported 00:09:32.066 Deallocated Guard Field: 0xFFFF 00:09:32.066 Flush: Supported 00:09:32.066 Reservation: Not Supported 00:09:32.066 Namespace Sharing Capabilities: Private 00:09:32.066 Size (in LBAs): 1048576 (4GiB) 00:09:32.066 Capacity (in LBAs): 1048576 (4GiB) 00:09:32.066 Utilization (in LBAs): 1048576 (4GiB) 00:09:32.066 Thin Provisioning: Not Supported 00:09:32.066 Per-NS Atomic Units: No 00:09:32.066 Maximum Single Source Range Length: 128 00:09:32.066 Maximum Copy Length: 128 00:09:32.066 Maximum Source Range Count: 128 00:09:32.066 NGUID/EUI64 Never Reused: No 00:09:32.066 Namespace Write Protected: No 00:09:32.066 Number of LBA Formats: 8 00:09:32.066 Current LBA Format: LBA Format #04 00:09:32.066 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.066 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.066 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.066 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.066 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.066 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.066 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.066 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.066 00:09:32.066 Namespace ID:2 00:09:32.066 Error Recovery Timeout: Unlimited 00:09:32.066 Command Set Identifier: NVM (00h) 00:09:32.066 Deallocate: Supported 00:09:32.066 Deallocated/Unwritten Error: Supported 00:09:32.066 Deallocated Read Value: All 0x00 00:09:32.066 Deallocate in Write Zeroes: Not Supported 00:09:32.066 Deallocated Guard Field: 0xFFFF 00:09:32.066 Flush: Supported 00:09:32.066 Reservation: Not Supported 00:09:32.066 Namespace Sharing Capabilities: Private 00:09:32.066 Size (in LBAs): 1048576 (4GiB) 00:09:32.066 Capacity (in LBAs): 1048576 (4GiB) 00:09:32.066 Utilization (in LBAs): 1048576 (4GiB) 00:09:32.066 Thin Provisioning: Not Supported 00:09:32.066 Per-NS Atomic Units: No 00:09:32.066 Maximum Single 
Source Range Length: 128 00:09:32.066 Maximum Copy Length: 128 00:09:32.066 Maximum Source Range Count: 128 00:09:32.066 NGUID/EUI64 Never Reused: No 00:09:32.066 Namespace Write Protected: No 00:09:32.066 Number of LBA Formats: 8 00:09:32.066 Current LBA Format: LBA Format #04 00:09:32.066 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.066 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.066 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.066 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.066 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.066 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.066 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.066 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.066 00:09:32.066 Namespace ID:3 00:09:32.066 Error Recovery Timeout: Unlimited 00:09:32.066 Command Set Identifier: NVM (00h) 00:09:32.066 Deallocate: Supported 00:09:32.066 Deallocated/Unwritten Error: Supported 00:09:32.066 Deallocated Read Value: All 0x00 00:09:32.066 Deallocate in Write Zeroes: Not Supported 00:09:32.066 Deallocated Guard Field: 0xFFFF 00:09:32.066 Flush: Supported 00:09:32.066 Reservation: Not Supported 00:09:32.066 Namespace Sharing Capabilities: Private 00:09:32.066 Size (in LBAs): 1048576 (4GiB) 00:09:32.066 Capacity (in LBAs): 1048576 (4GiB) 00:09:32.066 Utilization (in LBAs): 1048576 (4GiB) 00:09:32.066 Thin Provisioning: Not Supported 00:09:32.066 Per-NS Atomic Units: No 00:09:32.066 Maximum Single Source Range Length: 128 00:09:32.066 Maximum Copy Length: 128 00:09:32.066 Maximum Source Range Count: 128 00:09:32.066 NGUID/EUI64 Never Reused: No 00:09:32.066 Namespace Write Protected: No 00:09:32.066 Number of LBA Formats: 8 00:09:32.066 Current LBA Format: LBA Format #04 00:09:32.066 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.066 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.066 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.066 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.066 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.066 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.066 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.066 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.066 00:09:32.066 09:47:20 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:32.066 09:47:20 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:32.325 ===================================================== 00:09:32.325 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:32.325 ===================================================== 00:09:32.325 Controller Capabilities/Features 00:09:32.325 ================================ 00:09:32.325 Vendor ID: 1b36 00:09:32.325 Subsystem Vendor ID: 1af4 00:09:32.325 Serial Number: 12343 00:09:32.325 Model Number: QEMU NVMe Ctrl 00:09:32.325 Firmware Version: 8.0.0 00:09:32.325 Recommended Arb Burst: 6 00:09:32.325 IEEE OUI Identifier: 00 54 52 00:09:32.325 Multi-path I/O 00:09:32.325 May have multiple subsystem ports: No 00:09:32.325 May have multiple controllers: Yes 00:09:32.325 Associated with SR-IOV VF: No 00:09:32.325 Max Data Transfer Size: 524288 00:09:32.325 Max Number of Namespaces: 256 00:09:32.325 Max Number of I/O Queues: 64 00:09:32.325 NVMe Specification Version (VS): 1.4 00:09:32.325 NVMe Specification Version (Identify): 1.4 00:09:32.325 Maximum Queue Entries: 2048 00:09:32.325 Contiguous Queues 
Required: Yes 00:09:32.325 Arbitration Mechanisms Supported 00:09:32.325 Weighted Round Robin: Not Supported 00:09:32.325 Vendor Specific: Not Supported 00:09:32.325 Reset Timeout: 7500 ms 00:09:32.325 Doorbell Stride: 4 bytes 00:09:32.325 NVM Subsystem Reset: Not Supported 00:09:32.325 Command Sets Supported 00:09:32.325 NVM Command Set: Supported 00:09:32.325 Boot Partition: Not Supported 00:09:32.325 Memory Page Size Minimum: 4096 bytes 00:09:32.325 Memory Page Size Maximum: 65536 bytes 00:09:32.325 Persistent Memory Region: Not Supported 00:09:32.325 Optional Asynchronous Events Supported 00:09:32.325 Namespace Attribute Notices: Supported 00:09:32.325 Firmware Activation Notices: Not Supported 00:09:32.325 ANA Change Notices: Not Supported 00:09:32.325 PLE Aggregate Log Change Notices: Not Supported 00:09:32.325 LBA Status Info Alert Notices: Not Supported 00:09:32.325 EGE Aggregate Log Change Notices: Not Supported 00:09:32.325 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.325 Zone Descriptor Change Notices: Not Supported 00:09:32.326 Discovery Log Change Notices: Not Supported 00:09:32.326 Controller Attributes 00:09:32.326 128-bit Host Identifier: Not Supported 00:09:32.326 Non-Operational Permissive Mode: Not Supported 00:09:32.326 NVM Sets: Not Supported 00:09:32.326 Read Recovery Levels: Not Supported 00:09:32.326 Endurance Groups: Supported 00:09:32.326 Predictable Latency Mode: Not Supported 00:09:32.326 Traffic Based Keep Alive: Not Supported 00:09:32.326 Namespace Granularity: Not Supported 00:09:32.326 SQ Associations: Not Supported 00:09:32.326 UUID List: Not Supported 00:09:32.326 Multi-Domain Subsystem: Not Supported 00:09:32.326 Fixed Capacity Management: Not Supported 00:09:32.326 Variable Capacity Management: Not Supported 00:09:32.326 Delete Endurance Group: Not Supported 00:09:32.326 Delete NVM Set: Not Supported 00:09:32.326 Extended LBA Formats Supported: Supported 00:09:32.326 Flexible Data Placement Supported: Supported 00:09:32.326 00:09:32.326 Controller Memory Buffer Support 00:09:32.326 ================================ 00:09:32.326 Supported: No 00:09:32.326 00:09:32.326 Persistent Memory Region Support 00:09:32.326 ================================ 00:09:32.326 Supported: No 00:09:32.326 00:09:32.326 Admin Command Set Attributes 00:09:32.326 ============================ 00:09:32.326 Security Send/Receive: Not Supported 00:09:32.326 Format NVM: Supported 00:09:32.326 Firmware Activate/Download: Not Supported 00:09:32.326 Namespace Management: Supported 00:09:32.326 Device Self-Test: Not Supported 00:09:32.326 Directives: Supported 00:09:32.326 NVMe-MI: Not Supported 00:09:32.326 Virtualization Management: Not Supported 00:09:32.326 Doorbell Buffer Config: Supported 00:09:32.326 Get LBA Status Capability: Not Supported 00:09:32.326 Command & Feature Lockdown Capability: Not Supported 00:09:32.326 Abort Command Limit: 4 00:09:32.326 Async Event Request Limit: 4 00:09:32.326 Number of Firmware Slots: N/A 00:09:32.326 Firmware Slot 1 Read-Only: N/A 00:09:32.326 Firmware Activation Without Reset: N/A 00:09:32.326 Multiple Update Detection Support: N/A 00:09:32.326 Firmware Update Granularity: No Information Provided 00:09:32.326 Per-Namespace SMART Log: Yes 00:09:32.326 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.326 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.326 Command Effects Log Page: Supported 00:09:32.326 Get Log Page Extended Data: Supported 00:09:32.326 Telemetry Log Pages: Not Supported 00:09:32.326 Persistent
Event Log Pages: Not Supported 00:09:32.326 Supported Log Pages Log Page: May Support 00:09:32.326 Commands Supported & Effects Log Page: Not Supported 00:09:32.326 Feature Identifiers & Effects Log Page: May Support 00:09:32.326 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.326 Data Area 4 for Telemetry Log: Not Supported 00:09:32.326 Error Log Page Entries Supported: 1 00:09:32.326 Keep Alive: Not Supported 00:09:32.326 00:09:32.326 NVM Command Set Attributes 00:09:32.326 ========================== 00:09:32.326 Submission Queue Entry Size 00:09:32.326 Max: 64 00:09:32.326 Min: 64 00:09:32.326 Completion Queue Entry Size 00:09:32.326 Max: 16 00:09:32.326 Min: 16 00:09:32.326 Number of Namespaces: 256 00:09:32.326 Compare Command: Supported 00:09:32.326 Write Uncorrectable Command: Not Supported 00:09:32.326 Dataset Management Command: Supported 00:09:32.326 Write Zeroes Command: Supported 00:09:32.326 Set Features Save Field: Supported 00:09:32.326 Reservations: Not Supported 00:09:32.326 Timestamp: Supported 00:09:32.326 Copy: Supported 00:09:32.326 Volatile Write Cache: Present 00:09:32.326 Atomic Write Unit (Normal): 1 00:09:32.326 Atomic Write Unit (PFail): 1 00:09:32.326 Atomic Compare & Write Unit: 1 00:09:32.326 Fused Compare & Write: Not Supported 00:09:32.326 Scatter-Gather List 00:09:32.326 SGL Command Set: Supported 00:09:32.326 SGL Keyed: Not Supported 00:09:32.326 SGL Bit Bucket Descriptor: Not Supported 00:09:32.326 SGL Metadata Pointer: Not Supported 00:09:32.326 Oversized SGL: Not Supported 00:09:32.326 SGL Metadata Address: Not Supported 00:09:32.326 SGL Offset: Not Supported 00:09:32.326 Transport SGL Data Block: Not Supported 00:09:32.326 Replay Protected Memory Block: Not Supported 00:09:32.326 00:09:32.326 Firmware Slot Information 00:09:32.326 ========================= 00:09:32.326 Active slot: 1 00:09:32.326 Slot 1 Firmware Revision: 1.0 00:09:32.326 00:09:32.326 00:09:32.326 Commands Supported and Effects 00:09:32.326 ============================== 00:09:32.326 Admin Commands 00:09:32.326 -------------- 00:09:32.326 Delete I/O Submission Queue (00h): Supported 00:09:32.326 Create I/O Submission Queue (01h): Supported 00:09:32.326 Get Log Page (02h): Supported 00:09:32.326 Delete I/O Completion Queue (04h): Supported 00:09:32.326 Create I/O Completion Queue (05h): Supported 00:09:32.326 Identify (06h): Supported 00:09:32.326 Abort (08h): Supported 00:09:32.326 Set Features (09h): Supported 00:09:32.326 Get Features (0Ah): Supported 00:09:32.326 Asynchronous Event Request (0Ch): Supported 00:09:32.326 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.326 Directive Send (19h): Supported 00:09:32.326 Directive Receive (1Ah): Supported 00:09:32.326 Virtualization Management (1Ch): Supported 00:09:32.326 Doorbell Buffer Config (7Ch): Supported 00:09:32.326 Format NVM (80h): Supported LBA-Change 00:09:32.326 I/O Commands 00:09:32.326 ------------ 00:09:32.326 Flush (00h): Supported LBA-Change 00:09:32.326 Write (01h): Supported LBA-Change 00:09:32.326 Read (02h): Supported 00:09:32.326 Compare (05h): Supported 00:09:32.326 Write Zeroes (08h): Supported LBA-Change 00:09:32.326 Dataset Management (09h): Supported LBA-Change 00:09:32.326 Unknown (0Ch): Supported 00:09:32.326 Unknown (12h): Supported 00:09:32.326 Copy (19h): Supported LBA-Change 00:09:32.326 Unknown (1Dh): Supported LBA-Change 00:09:32.326 00:09:32.326 Error Log 00:09:32.326 ========= 00:09:32.326 00:09:32.326 Arbitration 00:09:32.326 =========== 00:09:32.326 Arbitration Burst: no
limit 00:09:32.326 00:09:32.326 Power Management 00:09:32.326 ================ 00:09:32.326 Number of Power States: 1 00:09:32.326 Current Power State: Power State #0 00:09:32.326 Power State #0: 00:09:32.326 Max Power: 25.00 W 00:09:32.326 Non-Operational State: Operational 00:09:32.326 Entry Latency: 16 microseconds 00:09:32.326 Exit Latency: 4 microseconds 00:09:32.326 Relative Read Throughput: 0 00:09:32.326 Relative Read Latency: 0 00:09:32.326 Relative Write Throughput: 0 00:09:32.326 Relative Write Latency: 0 00:09:32.326 Idle Power: Not Reported 00:09:32.326 Active Power: Not Reported 00:09:32.326 Non-Operational Permissive Mode: Not Supported 00:09:32.326 00:09:32.326 Health Information 00:09:32.326 ================== 00:09:32.326 Critical Warnings: 00:09:32.326 Available Spare Space: OK 00:09:32.326 Temperature: OK 00:09:32.326 Device Reliability: OK 00:09:32.326 Read Only: No 00:09:32.326 Volatile Memory Backup: OK 00:09:32.326 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.326 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.326 Available Spare: 0% 00:09:32.326 Available Spare Threshold: 0% 00:09:32.326 Life Percentage Used: 0% 00:09:32.326 Data Units Read: 1406 00:09:32.326 Data Units Written: 652 00:09:32.326 Host Read Commands: 58142 00:09:32.326 Host Write Commands: 28527 00:09:32.326 Controller Busy Time: 0 minutes 00:09:32.326 Power Cycles: 0 00:09:32.326 Power On Hours: 0 hours 00:09:32.326 Unsafe Shutdowns: 0 00:09:32.326 Unrecoverable Media Errors: 0 00:09:32.326 Lifetime Error Log Entries: 0 00:09:32.326 Warning Temperature Time: 0 minutes 00:09:32.326 Critical Temperature Time: 0 minutes 00:09:32.326 00:09:32.326 Number of Queues 00:09:32.326 ================ 00:09:32.326 Number of I/O Submission Queues: 64 00:09:32.326 Number of I/O Completion Queues: 64 00:09:32.326 00:09:32.326 ZNS Specific Controller Data 00:09:32.326 ============================ 00:09:32.326 Zone Append Size Limit: 0 00:09:32.326 00:09:32.326 00:09:32.326 Active Namespaces 00:09:32.326 ================= 00:09:32.326 Namespace ID:1 00:09:32.326 Error Recovery Timeout: Unlimited 00:09:32.326 Command Set Identifier: NVM (00h) 00:09:32.326 Deallocate: Supported 00:09:32.326 Deallocated/Unwritten Error: Supported 00:09:32.326 Deallocated Read Value: All 0x00 00:09:32.326 Deallocate in Write Zeroes: Not Supported 00:09:32.326 Deallocated Guard Field: 0xFFFF 00:09:32.326 Flush: Supported 00:09:32.326 Reservation: Not Supported 00:09:32.326 Namespace Sharing Capabilities: Multiple Controllers 00:09:32.326 Size (in LBAs): 262144 (1GiB) 00:09:32.326 Capacity (in LBAs): 262144 (1GiB) 00:09:32.326 Utilization (in LBAs): 262144 (1GiB) 00:09:32.326 Thin Provisioning: Not Supported 00:09:32.326 Per-NS Atomic Units: No 00:09:32.326 Maximum Single Source Range Length: 128 00:09:32.326 Maximum Copy Length: 128 00:09:32.326 Maximum Source Range Count: 128 00:09:32.326 NGUID/EUI64 Never Reused: No 00:09:32.326 Namespace Write Protected: No 00:09:32.326 Endurance group ID: 1 00:09:32.326 Number of LBA Formats: 8 00:09:32.326 Current LBA Format: LBA Format #04 00:09:32.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.326 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.326 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.326 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.326 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.326 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.326 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.326 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.326 00:09:32.326 Get Feature FDP: 00:09:32.326 ================ 00:09:32.326 Enabled: Yes 00:09:32.326 FDP configuration index: 0 00:09:32.326 00:09:32.326 FDP configurations log page 00:09:32.326 =========================== 00:09:32.326 Number of FDP configurations: 1 00:09:32.326 Version: 0 00:09:32.326 Size: 112 00:09:32.326 FDP Configuration Descriptor: 0 00:09:32.326 Descriptor Size: 96 00:09:32.326 Reclaim Group Identifier format: 2 00:09:32.326 FDP Volatile Write Cache: Not Present 00:09:32.326 FDP Configuration: Valid 00:09:32.326 Vendor Specific Size: 0 00:09:32.326 Number of Reclaim Groups: 2 00:09:32.326 Number of Reclaim Unit Handles: 8 00:09:32.326 Max Placement Identifiers: 128 00:09:32.326 Number of Namespaces Supported: 256 00:09:32.326 Reclaim Unit Nominal Size: 6000000 bytes 00:09:32.326 Estimated Reclaim Unit Time Limit: Not Reported 00:09:32.326 RUH Desc #000: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #001: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #002: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #003: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #004: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #005: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #006: RUH Type: Initially Isolated 00:09:32.326 RUH Desc #007: RUH Type: Initially Isolated 00:09:32.326 00:09:32.326 FDP reclaim unit handle usage log page 00:09:32.326 ====================================== 00:09:32.326 Number of Reclaim Unit Handles: 8 00:09:32.326 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:32.327 RUH Usage Desc #001: RUH Attributes: Unused 00:09:32.327 RUH Usage Desc #002: RUH Attributes: Unused 00:09:32.327 RUH Usage Desc #003: RUH Attributes: Unused 00:09:32.327 RUH Usage Desc #004: RUH Attributes: Unused 00:09:32.327 RUH Usage Desc #005: RUH Attributes: Unused 00:09:32.327 RUH Usage Desc #006: RUH Attributes: Unused 00:09:32.327 RUH Usage Desc #007: RUH Attributes: Unused 00:09:32.327 00:09:32.327 FDP statistics log page 00:09:32.327 ======================= 00:09:32.327 Host bytes with metadata written: 421486592 00:09:32.327 Media bytes with metadata written: 421564416 00:09:32.327 Media bytes erased: 0 00:09:32.327 00:09:32.327 FDP events log page 00:09:32.327 =================== 00:09:32.327 Number of FDP events: 0 00:09:32.327 00:09:32.327 00:09:32.327 real 0m1.118s 00:09:32.327 user 0m0.356s 00:09:32.327 sys 0m0.532s 00:09:32.327 09:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.327 ************************************ 00:09:32.327 09:47:21 -- common/autotest_common.sh@10 -- # set +x 00:09:32.327 END TEST nvme_identify 00:09:32.327 ************************************ 00:09:32.327 09:47:21 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:32.327 09:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.327 09:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.327 09:47:21 -- common/autotest_common.sh@10 -- # set +x 00:09:32.327 ************************************ 00:09:32.327 START TEST nvme_perf 00:09:32.327 ************************************ 00:09:32.327 09:47:21 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:32.327 09:47:21 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:33.700 Initializing NVMe Controllers 00:09:33.700 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:33.700 Attached to NVMe Controller at
0000:00:06.0 [1b36:0010] 00:09:33.700 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:33.700 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:33.700 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:33.700 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:33.700 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:33.700 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:33.700 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:33.700 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:33.700 Initialization complete. Launching workers. 00:09:33.700 ======================================================== 00:09:33.700 Latency(us) 00:09:33.700 Device Information : IOPS MiB/s Average min max 00:09:33.700 PCIE (0000:00:09.0) NSID 1 from core 0: 20441.30 239.55 6261.86 5050.94 30273.33 00:09:33.700 PCIE (0000:00:06.0) NSID 1 from core 0: 20441.30 239.55 6261.19 4926.81 29487.51 00:09:33.700 PCIE (0000:00:07.0) NSID 1 from core 0: 20441.30 239.55 6261.54 5042.37 28073.88 00:09:33.700 PCIE (0000:00:08.0) NSID 1 from core 0: 20441.30 239.55 6260.84 5096.19 27633.31 00:09:33.700 PCIE (0000:00:08.0) NSID 2 from core 0: 20441.30 239.55 6259.84 5095.13 26365.22 00:09:33.700 PCIE (0000:00:08.0) NSID 3 from core 0: 20568.27 241.03 6216.87 5065.76 18243.78 00:09:33.700 ======================================================== 00:09:33.700 Total : 122774.77 1438.77 6253.65 4926.81 30273.33 00:09:33.700 00:09:33.700 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:33.700 ================================================================================= 00:09:33.700 1.00000% : 5217.674us 00:09:33.700 10.00000% : 5394.117us 00:09:33.700 25.00000% : 5671.385us 00:09:33.700 50.00000% : 6125.095us 00:09:33.700 75.00000% : 6553.600us 00:09:33.700 90.00000% : 6805.662us 00:09:33.700 95.00000% : 6906.486us 00:09:33.700 98.00000% : 7309.785us 00:09:33.700 99.00000% : 9527.926us 00:09:33.700 99.50000% : 28029.243us 00:09:33.700 99.90000% : 29844.086us 00:09:33.700 99.99000% : 30247.385us 00:09:33.700 99.99900% : 30449.034us 00:09:33.700 99.99990% : 30449.034us 00:09:33.700 99.99999% : 30449.034us 00:09:33.700 00:09:33.700 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:33.700 ================================================================================= 00:09:33.700 1.00000% : 5066.437us 00:09:33.700 10.00000% : 5293.292us 00:09:33.700 25.00000% : 5595.766us 00:09:33.700 50.00000% : 6125.095us 00:09:33.700 75.00000% : 6654.425us 00:09:33.700 90.00000% : 6956.898us 00:09:33.700 95.00000% : 7057.723us 00:09:33.700 98.00000% : 7360.197us 00:09:33.700 99.00000% : 10737.822us 00:09:33.700 99.50000% : 26819.348us 00:09:33.700 99.90000% : 29037.489us 00:09:33.700 99.99000% : 29440.788us 00:09:33.700 99.99900% : 29642.437us 00:09:33.700 99.99990% : 29642.437us 00:09:33.700 99.99999% : 29642.437us 00:09:33.700 00:09:33.700 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:33.700 ================================================================================= 00:09:33.700 1.00000% : 5192.468us 00:09:33.700 10.00000% : 5394.117us 00:09:33.700 25.00000% : 5671.385us 00:09:33.700 50.00000% : 6125.095us 00:09:33.700 75.00000% : 6553.600us 00:09:33.700 90.00000% : 6805.662us 00:09:33.700 95.00000% : 6956.898us 00:09:33.700 98.00000% : 7259.372us 00:09:33.700 99.00000% : 11998.129us 00:09:33.700 99.50000% : 25609.452us 00:09:33.700 99.90000% : 27625.945us 00:09:33.700 99.99000% : 28029.243us 
00:09:33.700 99.99900% : 28230.892us 00:09:33.700 99.99990% : 28230.892us 00:09:33.700 99.99999% : 28230.892us 00:09:33.700 00:09:33.700 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:33.700 ================================================================================= 00:09:33.700 1.00000% : 5217.674us 00:09:33.700 10.00000% : 5394.117us 00:09:33.700 25.00000% : 5671.385us 00:09:33.700 50.00000% : 6125.095us 00:09:33.700 75.00000% : 6553.600us 00:09:33.700 90.00000% : 6856.074us 00:09:33.700 95.00000% : 6956.898us 00:09:33.700 98.00000% : 7461.022us 00:09:33.700 99.00000% : 11594.831us 00:09:33.700 99.50000% : 25206.154us 00:09:33.700 99.90000% : 27222.646us 00:09:33.700 99.99000% : 27625.945us 00:09:33.700 99.99900% : 27827.594us 00:09:33.700 99.99990% : 27827.594us 00:09:33.700 99.99999% : 27827.594us 00:09:33.700 00:09:33.700 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:33.700 ================================================================================= 00:09:33.700 1.00000% : 5217.674us 00:09:33.700 10.00000% : 5419.323us 00:09:33.700 25.00000% : 5671.385us 00:09:33.700 50.00000% : 6125.095us 00:09:33.700 75.00000% : 6553.600us 00:09:33.700 90.00000% : 6856.074us 00:09:33.700 95.00000% : 6956.898us 00:09:33.700 98.00000% : 7662.671us 00:09:33.700 99.00000% : 11494.006us 00:09:33.700 99.50000% : 23895.434us 00:09:33.700 99.90000% : 26012.751us 00:09:33.700 99.99000% : 26416.049us 00:09:33.700 99.99900% : 26416.049us 00:09:33.700 99.99990% : 26416.049us 00:09:33.700 99.99999% : 26416.049us 00:09:33.700 00:09:33.700 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:33.700 ================================================================================= 00:09:33.700 1.00000% : 5217.674us 00:09:33.700 10.00000% : 5419.323us 00:09:33.700 25.00000% : 5671.385us 00:09:33.700 50.00000% : 6125.095us 00:09:33.700 75.00000% : 6553.600us 00:09:33.700 90.00000% : 6856.074us 00:09:33.700 95.00000% : 6956.898us 00:09:33.700 98.00000% : 7965.145us 00:09:33.700 99.00000% : 11695.655us 00:09:33.700 99.50000% : 15829.465us 00:09:33.700 99.90000% : 17845.957us 00:09:33.700 99.99000% : 18249.255us 00:09:33.700 99.99900% : 18249.255us 00:09:33.700 99.99990% : 18249.255us 00:09:33.700 99.99999% : 18249.255us 00:09:33.700 00:09:33.700 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:33.700 ============================================================================== 00:09:33.700 Range in us Cumulative IO count 00:09:33.700 5041.231 - 5066.437: 0.0097% ( 2) 00:09:33.700 5066.437 - 5091.643: 0.0194% ( 2) 00:09:33.700 5091.643 - 5116.849: 0.0970% ( 16) 00:09:33.700 5116.849 - 5142.055: 0.2378% ( 29) 00:09:33.700 5142.055 - 5167.262: 0.5483% ( 64) 00:09:33.700 5167.262 - 5192.468: 0.9899% ( 91) 00:09:33.700 5192.468 - 5217.674: 1.6887% ( 144) 00:09:33.700 5217.674 - 5242.880: 2.5912% ( 186) 00:09:33.700 5242.880 - 5268.086: 3.6200% ( 212) 00:09:33.700 5268.086 - 5293.292: 4.7700% ( 237) 00:09:33.700 5293.292 - 5318.498: 6.0171% ( 257) 00:09:33.700 5318.498 - 5343.705: 7.2884% ( 262) 00:09:33.700 5343.705 - 5368.911: 8.6374% ( 278) 00:09:33.700 5368.911 - 5394.117: 10.0107% ( 283) 00:09:33.700 5394.117 - 5419.323: 11.3403% ( 274) 00:09:33.700 5419.323 - 5444.529: 12.7475% ( 290) 00:09:33.700 5444.529 - 5469.735: 14.1304% ( 285) 00:09:33.700 5469.735 - 5494.942: 15.5377% ( 290) 00:09:33.700 5494.942 - 5520.148: 16.9206% ( 285) 00:09:33.700 5520.148 - 5545.354: 18.3327% ( 291) 00:09:33.700 5545.354 - 5570.560: 19.8030% 
( 303) 00:09:33.700 5570.560 - 5595.766: 21.1957% ( 287) 00:09:33.700 5595.766 - 5620.972: 22.6660% ( 303) 00:09:33.700 5620.972 - 5646.178: 24.0635% ( 288) 00:09:33.700 5646.178 - 5671.385: 25.4804% ( 292) 00:09:33.700 5671.385 - 5696.591: 26.9458% ( 302) 00:09:33.700 5696.591 - 5721.797: 28.3822% ( 296) 00:09:33.700 5721.797 - 5747.003: 29.8088% ( 294) 00:09:33.700 5747.003 - 5772.209: 31.2112% ( 289) 00:09:33.700 5772.209 - 5797.415: 32.6524% ( 297) 00:09:33.700 5797.415 - 5822.622: 34.0790% ( 294) 00:09:33.700 5822.622 - 5847.828: 35.5056% ( 294) 00:09:33.700 5847.828 - 5873.034: 36.9371% ( 295) 00:09:33.700 5873.034 - 5898.240: 38.3443% ( 290) 00:09:33.700 5898.240 - 5923.446: 39.8001% ( 300) 00:09:33.700 5923.446 - 5948.652: 41.2170% ( 292) 00:09:33.700 5948.652 - 5973.858: 42.6339% ( 292) 00:09:33.700 5973.858 - 5999.065: 44.0363% ( 289) 00:09:33.700 5999.065 - 6024.271: 45.4726% ( 296) 00:09:33.700 6024.271 - 6049.477: 46.9090% ( 296) 00:09:33.700 6049.477 - 6074.683: 48.3259% ( 292) 00:09:33.700 6074.683 - 6099.889: 49.7719% ( 298) 00:09:33.700 6099.889 - 6125.095: 51.1986% ( 294) 00:09:33.700 6125.095 - 6150.302: 52.6058% ( 290) 00:09:33.700 6150.302 - 6175.508: 54.0470% ( 297) 00:09:33.700 6175.508 - 6200.714: 55.5027% ( 300) 00:09:33.700 6200.714 - 6225.920: 56.9196% ( 292) 00:09:33.700 6225.920 - 6251.126: 58.3705% ( 299) 00:09:33.701 6251.126 - 6276.332: 59.8166% ( 298) 00:09:33.701 6276.332 - 6301.538: 61.2626% ( 298) 00:09:33.701 6301.538 - 6326.745: 62.6601% ( 288) 00:09:33.701 6326.745 - 6351.951: 64.1159% ( 300) 00:09:33.701 6351.951 - 6377.157: 65.5716% ( 300) 00:09:33.701 6377.157 - 6402.363: 67.0031% ( 295) 00:09:33.701 6402.363 - 6427.569: 68.4540% ( 299) 00:09:33.701 6427.569 - 6452.775: 69.9146% ( 301) 00:09:33.701 6452.775 - 6503.188: 72.8115% ( 597) 00:09:33.701 6503.188 - 6553.600: 75.6939% ( 594) 00:09:33.701 6553.600 - 6604.012: 78.6345% ( 606) 00:09:33.701 6604.012 - 6654.425: 81.5606% ( 603) 00:09:33.701 6654.425 - 6704.837: 84.4429% ( 594) 00:09:33.701 6704.837 - 6755.249: 87.3496% ( 599) 00:09:33.701 6755.249 - 6805.662: 90.2222% ( 592) 00:09:33.701 6805.662 - 6856.074: 92.8911% ( 550) 00:09:33.701 6856.074 - 6906.486: 95.0116% ( 437) 00:09:33.701 6906.486 - 6956.898: 96.2636% ( 258) 00:09:33.701 6956.898 - 7007.311: 96.8847% ( 128) 00:09:33.701 7007.311 - 7057.723: 97.2389% ( 73) 00:09:33.701 7057.723 - 7108.135: 97.5107% ( 56) 00:09:33.701 7108.135 - 7158.548: 97.7145% ( 42) 00:09:33.701 7158.548 - 7208.960: 97.8843% ( 35) 00:09:33.701 7208.960 - 7259.372: 97.9911% ( 22) 00:09:33.701 7259.372 - 7309.785: 98.0687% ( 16) 00:09:33.701 7309.785 - 7360.197: 98.1464% ( 16) 00:09:33.701 7360.197 - 7410.609: 98.2094% ( 13) 00:09:33.701 7410.609 - 7461.022: 98.2628% ( 11) 00:09:33.701 7461.022 - 7511.434: 98.3162% ( 11) 00:09:33.701 7511.434 - 7561.846: 98.3453% ( 6) 00:09:33.701 7561.846 - 7612.258: 98.3841% ( 8) 00:09:33.701 7612.258 - 7662.671: 98.4229% ( 8) 00:09:33.701 7662.671 - 7713.083: 98.4618% ( 8) 00:09:33.701 7713.083 - 7763.495: 98.4860% ( 5) 00:09:33.701 7763.495 - 7813.908: 98.5006% ( 3) 00:09:33.701 7813.908 - 7864.320: 98.5151% ( 3) 00:09:33.701 7864.320 - 7914.732: 98.5345% ( 4) 00:09:33.701 7914.732 - 7965.145: 98.5540% ( 4) 00:09:33.701 7965.145 - 8015.557: 98.5734% ( 4) 00:09:33.701 8015.557 - 8065.969: 98.5879% ( 3) 00:09:33.701 8065.969 - 8116.382: 98.6073% ( 4) 00:09:33.701 8116.382 - 8166.794: 98.6267% ( 4) 00:09:33.701 8166.794 - 8217.206: 98.6462% ( 4) 00:09:33.701 8217.206 - 8267.618: 98.6607% ( 3) 00:09:33.701 8267.618 - 8318.031: 
98.6801% ( 4) 00:09:33.701 8318.031 - 8368.443: 98.7044% ( 5) 00:09:33.701 8368.443 - 8418.855: 98.7335% ( 6) 00:09:33.701 8418.855 - 8469.268: 98.7675% ( 7) 00:09:33.701 8469.268 - 8519.680: 98.7917% ( 5) 00:09:33.701 8519.680 - 8570.092: 98.8063% ( 3) 00:09:33.701 8570.092 - 8620.505: 98.8160% ( 2) 00:09:33.701 8620.505 - 8670.917: 98.8257% ( 2) 00:09:33.701 8670.917 - 8721.329: 98.8403% ( 3) 00:09:33.701 8721.329 - 8771.742: 98.8451% ( 1) 00:09:33.701 8771.742 - 8822.154: 98.8548% ( 2) 00:09:33.701 8822.154 - 8872.566: 98.8645% ( 2) 00:09:33.701 8872.566 - 8922.978: 98.8839% ( 4) 00:09:33.701 8922.978 - 8973.391: 98.8985% ( 3) 00:09:33.701 8973.391 - 9023.803: 98.9033% ( 1) 00:09:33.701 9023.803 - 9074.215: 98.9082% ( 1) 00:09:33.701 9074.215 - 9124.628: 98.9179% ( 2) 00:09:33.701 9124.628 - 9175.040: 98.9276% ( 2) 00:09:33.701 9175.040 - 9225.452: 98.9422% ( 3) 00:09:33.701 9225.452 - 9275.865: 98.9470% ( 1) 00:09:33.701 9275.865 - 9326.277: 98.9567% ( 2) 00:09:33.701 9326.277 - 9376.689: 98.9713% ( 3) 00:09:33.701 9376.689 - 9427.102: 98.9810% ( 2) 00:09:33.701 9427.102 - 9477.514: 98.9907% ( 2) 00:09:33.701 9477.514 - 9527.926: 99.0004% ( 2) 00:09:33.701 9527.926 - 9578.338: 99.0149% ( 3) 00:09:33.701 9578.338 - 9628.751: 99.0247% ( 2) 00:09:33.701 9628.751 - 9679.163: 99.0344% ( 2) 00:09:33.701 9679.163 - 9729.575: 99.0441% ( 2) 00:09:33.701 9729.575 - 9779.988: 99.0538% ( 2) 00:09:33.701 9779.988 - 9830.400: 99.0683% ( 3) 00:09:33.701 9830.400 - 9880.812: 99.0780% ( 2) 00:09:33.701 9880.812 - 9931.225: 99.0877% ( 2) 00:09:33.701 9931.225 - 9981.637: 99.0974% ( 2) 00:09:33.701 9981.637 - 10032.049: 99.1120% ( 3) 00:09:33.701 10032.049 - 10082.462: 99.1217% ( 2) 00:09:33.701 10082.462 - 10132.874: 99.1314% ( 2) 00:09:33.701 10132.874 - 10183.286: 99.1411% ( 2) 00:09:33.701 10183.286 - 10233.698: 99.1508% ( 2) 00:09:33.701 10233.698 - 10284.111: 99.1654% ( 3) 00:09:33.701 10284.111 - 10334.523: 99.1751% ( 2) 00:09:33.701 10334.523 - 10384.935: 99.1848% ( 2) 00:09:33.701 10384.935 - 10435.348: 99.1945% ( 2) 00:09:33.701 10435.348 - 10485.760: 99.2042% ( 2) 00:09:33.701 10485.760 - 10536.172: 99.2139% ( 2) 00:09:33.701 10536.172 - 10586.585: 99.2285% ( 3) 00:09:33.701 10586.585 - 10636.997: 99.2382% ( 2) 00:09:33.701 10636.997 - 10687.409: 99.2479% ( 2) 00:09:33.701 10687.409 - 10737.822: 99.2576% ( 2) 00:09:33.701 10737.822 - 10788.234: 99.2673% ( 2) 00:09:33.701 10788.234 - 10838.646: 99.2770% ( 2) 00:09:33.701 10838.646 - 10889.058: 99.2867% ( 2) 00:09:33.701 10889.058 - 10939.471: 99.3012% ( 3) 00:09:33.701 10939.471 - 10989.883: 99.3109% ( 2) 00:09:33.701 10989.883 - 11040.295: 99.3207% ( 2) 00:09:33.701 11040.295 - 11090.708: 99.3304% ( 2) 00:09:33.701 11090.708 - 11141.120: 99.3401% ( 2) 00:09:33.701 11141.120 - 11191.532: 99.3498% ( 2) 00:09:33.701 11191.532 - 11241.945: 99.3643% ( 3) 00:09:33.701 11241.945 - 11292.357: 99.3740% ( 2) 00:09:33.701 11292.357 - 11342.769: 99.3789% ( 1) 00:09:33.701 27222.646 - 27424.295: 99.3983% ( 4) 00:09:33.701 27424.295 - 27625.945: 99.4468% ( 10) 00:09:33.701 27625.945 - 27827.594: 99.4808% ( 7) 00:09:33.701 27827.594 - 28029.243: 99.5245% ( 9) 00:09:33.701 28029.243 - 28230.892: 99.5681% ( 9) 00:09:33.701 28230.892 - 28432.542: 99.6069% ( 8) 00:09:33.701 28432.542 - 28634.191: 99.6506% ( 9) 00:09:33.701 28634.191 - 28835.840: 99.6894% ( 8) 00:09:33.701 28835.840 - 29037.489: 99.7331% ( 9) 00:09:33.701 29037.489 - 29239.138: 99.7719% ( 8) 00:09:33.701 29239.138 - 29440.788: 99.8205% ( 10) 00:09:33.701 29440.788 - 29642.437: 99.8641% ( 9) 
00:09:33.701 29642.437 - 29844.086: 99.9078% ( 9) 00:09:33.701 29844.086 - 30045.735: 99.9515% ( 9) 00:09:33.701 30045.735 - 30247.385: 99.9951% ( 9) 00:09:33.701 30247.385 - 30449.034: 100.0000% ( 1) 00:09:33.701 00:09:33.701 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:33.701 ============================================================================== 00:09:33.701 Range in us Cumulative IO count 00:09:33.701 4915.200 - 4940.406: 0.0194% ( 4) 00:09:33.701 4940.406 - 4965.612: 0.0825% ( 13) 00:09:33.701 4965.612 - 4990.818: 0.2572% ( 36) 00:09:33.701 4990.818 - 5016.025: 0.4998% ( 50) 00:09:33.701 5016.025 - 5041.231: 0.9705% ( 97) 00:09:33.701 5041.231 - 5066.437: 1.5431% ( 118) 00:09:33.701 5066.437 - 5091.643: 2.3535% ( 167) 00:09:33.701 5091.643 - 5116.849: 3.3288% ( 201) 00:09:33.701 5116.849 - 5142.055: 4.2896% ( 198) 00:09:33.701 5142.055 - 5167.262: 5.3717% ( 223) 00:09:33.701 5167.262 - 5192.468: 6.6091% ( 255) 00:09:33.701 5192.468 - 5217.674: 7.7155% ( 228) 00:09:33.701 5217.674 - 5242.880: 8.8315% ( 230) 00:09:33.701 5242.880 - 5268.086: 9.9330% ( 227) 00:09:33.701 5268.086 - 5293.292: 11.0345% ( 227) 00:09:33.701 5293.292 - 5318.498: 12.1749% ( 235) 00:09:33.701 5318.498 - 5343.705: 13.3929% ( 251) 00:09:33.701 5343.705 - 5368.911: 14.4992% ( 228) 00:09:33.701 5368.911 - 5394.117: 15.7706% ( 262) 00:09:33.701 5394.117 - 5419.323: 16.9158% ( 236) 00:09:33.701 5419.323 - 5444.529: 18.1677% ( 258) 00:09:33.701 5444.529 - 5469.735: 19.2886% ( 231) 00:09:33.701 5469.735 - 5494.942: 20.5066% ( 251) 00:09:33.701 5494.942 - 5520.148: 21.6955% ( 245) 00:09:33.701 5520.148 - 5545.354: 22.9425% ( 257) 00:09:33.701 5545.354 - 5570.560: 24.1508% ( 249) 00:09:33.701 5570.560 - 5595.766: 25.4125% ( 260) 00:09:33.701 5595.766 - 5620.972: 26.5334% ( 231) 00:09:33.701 5620.972 - 5646.178: 27.7853% ( 258) 00:09:33.701 5646.178 - 5671.385: 29.0615% ( 263) 00:09:33.701 5671.385 - 5696.591: 30.2649% ( 248) 00:09:33.701 5696.591 - 5721.797: 31.5120% ( 257) 00:09:33.701 5721.797 - 5747.003: 32.7446% ( 254) 00:09:33.701 5747.003 - 5772.209: 33.9480% ( 248) 00:09:33.701 5772.209 - 5797.415: 35.1854% ( 255) 00:09:33.701 5797.415 - 5822.622: 36.4130% ( 253) 00:09:33.701 5822.622 - 5847.828: 37.6601% ( 257) 00:09:33.701 5847.828 - 5873.034: 38.8005% ( 235) 00:09:33.701 5873.034 - 5898.240: 40.1300% ( 274) 00:09:33.701 5898.240 - 5923.446: 41.1927% ( 219) 00:09:33.701 5923.446 - 5948.652: 42.5417% ( 278) 00:09:33.701 5948.652 - 5973.858: 43.7112% ( 241) 00:09:33.701 5973.858 - 5999.065: 44.9097% ( 247) 00:09:33.701 5999.065 - 6024.271: 46.1665% ( 259) 00:09:33.701 6024.271 - 6049.477: 47.3894% ( 252) 00:09:33.701 6049.477 - 6074.683: 48.5637% ( 242) 00:09:33.701 6074.683 - 6099.889: 49.7622% ( 247) 00:09:33.701 6099.889 - 6125.095: 50.9948% ( 254) 00:09:33.701 6125.095 - 6150.302: 52.2079% ( 250) 00:09:33.701 6150.302 - 6175.508: 53.4113% ( 248) 00:09:33.701 6175.508 - 6200.714: 54.5953% ( 244) 00:09:33.701 6200.714 - 6225.920: 55.8472% ( 258) 00:09:33.701 6225.920 - 6251.126: 57.0604% ( 250) 00:09:33.701 6251.126 - 6276.332: 58.2929% ( 254) 00:09:33.701 6276.332 - 6301.538: 59.4866% ( 246) 00:09:33.701 6301.538 - 6326.745: 60.8016% ( 271) 00:09:33.701 6326.745 - 6351.951: 61.9953% ( 246) 00:09:33.701 6351.951 - 6377.157: 63.2424% ( 257) 00:09:33.701 6377.157 - 6402.363: 64.4022% ( 239) 00:09:33.701 6402.363 - 6427.569: 65.6735% ( 262) 00:09:33.701 6427.569 - 6452.775: 66.8575% ( 244) 00:09:33.701 6452.775 - 6503.188: 69.3566% ( 515) 00:09:33.701 6503.188 - 6553.600: 71.8119% ( 
506) 00:09:33.701 6553.600 - 6604.012: 74.2382% ( 500) 00:09:33.701 6604.012 - 6654.425: 76.6741% ( 502) 00:09:33.701 6654.425 - 6704.837: 79.1392% ( 508) 00:09:33.701 6704.837 - 6755.249: 81.6285% ( 513) 00:09:33.701 6755.249 - 6805.662: 84.0839% ( 506) 00:09:33.701 6805.662 - 6856.074: 86.5441% ( 507) 00:09:33.702 6856.074 - 6906.486: 89.0576% ( 518) 00:09:33.702 6906.486 - 6956.898: 91.4693% ( 497) 00:09:33.702 6956.898 - 7007.311: 93.7112% ( 462) 00:09:33.702 7007.311 - 7057.723: 95.5891% ( 387) 00:09:33.702 7057.723 - 7108.135: 96.6421% ( 217) 00:09:33.702 7108.135 - 7158.548: 97.1613% ( 107) 00:09:33.702 7158.548 - 7208.960: 97.4622% ( 62) 00:09:33.702 7208.960 - 7259.372: 97.7048% ( 50) 00:09:33.702 7259.372 - 7309.785: 97.9037% ( 41) 00:09:33.702 7309.785 - 7360.197: 98.0493% ( 30) 00:09:33.702 7360.197 - 7410.609: 98.1512% ( 21) 00:09:33.702 7410.609 - 7461.022: 98.2143% ( 13) 00:09:33.702 7461.022 - 7511.434: 98.2822% ( 14) 00:09:33.702 7511.434 - 7561.846: 98.3405% ( 12) 00:09:33.702 7561.846 - 7612.258: 98.3987% ( 12) 00:09:33.702 7612.258 - 7662.671: 98.4326% ( 7) 00:09:33.702 7662.671 - 7713.083: 98.4618% ( 6) 00:09:33.702 7713.083 - 7763.495: 98.4763% ( 3) 00:09:33.702 7763.495 - 7813.908: 98.4909% ( 3) 00:09:33.702 7813.908 - 7864.320: 98.5006% ( 2) 00:09:33.702 7864.320 - 7914.732: 98.5151% ( 3) 00:09:33.702 7914.732 - 7965.145: 98.5345% ( 4) 00:09:33.702 7965.145 - 8015.557: 98.5491% ( 3) 00:09:33.702 8015.557 - 8065.969: 98.5588% ( 2) 00:09:33.702 8065.969 - 8116.382: 98.5782% ( 4) 00:09:33.702 8116.382 - 8166.794: 98.5928% ( 3) 00:09:33.702 8166.794 - 8217.206: 98.6122% ( 4) 00:09:33.702 8217.206 - 8267.618: 98.6267% ( 3) 00:09:33.702 8267.618 - 8318.031: 98.6462% ( 4) 00:09:33.702 8318.031 - 8368.443: 98.6559% ( 2) 00:09:33.702 8368.443 - 8418.855: 98.6704% ( 3) 00:09:33.702 8418.855 - 8469.268: 98.6850% ( 3) 00:09:33.702 8469.268 - 8519.680: 98.6947% ( 2) 00:09:33.702 8519.680 - 8570.092: 98.7141% ( 4) 00:09:33.702 8570.092 - 8620.505: 98.7189% ( 1) 00:09:33.702 8620.505 - 8670.917: 98.7384% ( 4) 00:09:33.702 8670.917 - 8721.329: 98.7578% ( 4) 00:09:33.702 9477.514 - 9527.926: 98.7675% ( 2) 00:09:33.702 9527.926 - 9578.338: 98.7723% ( 1) 00:09:33.702 9578.338 - 9628.751: 98.7869% ( 3) 00:09:33.702 9628.751 - 9679.163: 98.7966% ( 2) 00:09:33.702 9679.163 - 9729.575: 98.8111% ( 3) 00:09:33.702 9729.575 - 9779.988: 98.8160% ( 1) 00:09:33.702 9779.988 - 9830.400: 98.8257% ( 2) 00:09:33.702 9830.400 - 9880.812: 98.8354% ( 2) 00:09:33.702 9880.812 - 9931.225: 98.8451% ( 2) 00:09:33.702 9931.225 - 9981.637: 98.8548% ( 2) 00:09:33.702 9981.637 - 10032.049: 98.8645% ( 2) 00:09:33.702 10032.049 - 10082.462: 98.8742% ( 2) 00:09:33.702 10082.462 - 10132.874: 98.8839% ( 2) 00:09:33.702 10132.874 - 10183.286: 98.8936% ( 2) 00:09:33.702 10183.286 - 10233.698: 98.9033% ( 2) 00:09:33.702 10233.698 - 10284.111: 98.9130% ( 2) 00:09:33.702 10284.111 - 10334.523: 98.9179% ( 1) 00:09:33.702 10334.523 - 10384.935: 98.9325% ( 3) 00:09:33.702 10384.935 - 10435.348: 98.9422% ( 2) 00:09:33.702 10435.348 - 10485.760: 98.9470% ( 1) 00:09:33.702 10485.760 - 10536.172: 98.9616% ( 3) 00:09:33.702 10536.172 - 10586.585: 98.9713% ( 2) 00:09:33.702 10586.585 - 10636.997: 98.9810% ( 2) 00:09:33.702 10636.997 - 10687.409: 98.9907% ( 2) 00:09:33.702 10687.409 - 10737.822: 99.0004% ( 2) 00:09:33.702 10737.822 - 10788.234: 99.0101% ( 2) 00:09:33.702 10788.234 - 10838.646: 99.0198% ( 2) 00:09:33.702 10838.646 - 10889.058: 99.0295% ( 2) 00:09:33.702 10889.058 - 10939.471: 99.0392% ( 2) 00:09:33.702 10939.471 
- 10989.883: 99.0441% ( 1) 00:09:33.702 10989.883 - 11040.295: 99.0635% ( 4) 00:09:33.702 11090.708 - 11141.120: 99.0829% ( 4) 00:09:33.702 11191.532 - 11241.945: 99.0974% ( 3) 00:09:33.702 11241.945 - 11292.357: 99.1071% ( 2) 00:09:33.702 11292.357 - 11342.769: 99.1120% ( 1) 00:09:33.702 11342.769 - 11393.182: 99.1314% ( 4) 00:09:33.702 11443.594 - 11494.006: 99.1508% ( 4) 00:09:33.702 11494.006 - 11544.418: 99.1557% ( 1) 00:09:33.702 11544.418 - 11594.831: 99.1654% ( 2) 00:09:33.702 11594.831 - 11645.243: 99.1702% ( 1) 00:09:33.702 11645.243 - 11695.655: 99.1799% ( 2) 00:09:33.702 11695.655 - 11746.068: 99.1896% ( 2) 00:09:33.702 11746.068 - 11796.480: 99.1993% ( 2) 00:09:33.702 11796.480 - 11846.892: 99.2090% ( 2) 00:09:33.702 11846.892 - 11897.305: 99.2188% ( 2) 00:09:33.702 11897.305 - 11947.717: 99.2285% ( 2) 00:09:33.702 11947.717 - 11998.129: 99.2382% ( 2) 00:09:33.702 11998.129 - 12048.542: 99.2527% ( 3) 00:09:33.702 12048.542 - 12098.954: 99.2576% ( 1) 00:09:33.702 12098.954 - 12149.366: 99.2673% ( 2) 00:09:33.702 12149.366 - 12199.778: 99.2770% ( 2) 00:09:33.702 12199.778 - 12250.191: 99.2867% ( 2) 00:09:33.702 12250.191 - 12300.603: 99.3012% ( 3) 00:09:33.702 12300.603 - 12351.015: 99.3109% ( 2) 00:09:33.702 12351.015 - 12401.428: 99.3158% ( 1) 00:09:33.702 12401.428 - 12451.840: 99.3255% ( 2) 00:09:33.702 12451.840 - 12502.252: 99.3352% ( 2) 00:09:33.702 12502.252 - 12552.665: 99.3498% ( 3) 00:09:33.702 12552.665 - 12603.077: 99.3546% ( 1) 00:09:33.702 12603.077 - 12653.489: 99.3643% ( 2) 00:09:33.702 12653.489 - 12703.902: 99.3740% ( 2) 00:09:33.702 12703.902 - 12754.314: 99.3789% ( 1) 00:09:33.702 26012.751 - 26214.400: 99.3886% ( 2) 00:09:33.702 26214.400 - 26416.049: 99.4274% ( 8) 00:09:33.702 26416.049 - 26617.698: 99.4614% ( 7) 00:09:33.702 26617.698 - 26819.348: 99.5002% ( 8) 00:09:33.702 26819.348 - 27020.997: 99.5342% ( 7) 00:09:33.702 27020.997 - 27222.646: 99.5681% ( 7) 00:09:33.702 27222.646 - 27424.295: 99.6069% ( 8) 00:09:33.702 27424.295 - 27625.945: 99.6506% ( 9) 00:09:33.702 27625.945 - 27827.594: 99.6846% ( 7) 00:09:33.702 27827.594 - 28029.243: 99.7234% ( 8) 00:09:33.702 28029.243 - 28230.892: 99.7574% ( 7) 00:09:33.702 28230.892 - 28432.542: 99.7962% ( 8) 00:09:33.702 28432.542 - 28634.191: 99.8350% ( 8) 00:09:33.702 28634.191 - 28835.840: 99.8738% ( 8) 00:09:33.702 28835.840 - 29037.489: 99.9127% ( 8) 00:09:33.702 29037.489 - 29239.138: 99.9515% ( 8) 00:09:33.702 29239.138 - 29440.788: 99.9903% ( 8) 00:09:33.702 29440.788 - 29642.437: 100.0000% ( 2) 00:09:33.702 00:09:33.702 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:33.702 ============================================================================== 00:09:33.702 Range in us Cumulative IO count 00:09:33.702 5041.231 - 5066.437: 0.0485% ( 10) 00:09:33.702 5066.437 - 5091.643: 0.0825% ( 7) 00:09:33.702 5091.643 - 5116.849: 0.1747% ( 19) 00:09:33.702 5116.849 - 5142.055: 0.3397% ( 34) 00:09:33.702 5142.055 - 5167.262: 0.5677% ( 47) 00:09:33.702 5167.262 - 5192.468: 1.0190% ( 93) 00:09:33.702 5192.468 - 5217.674: 1.7275% ( 146) 00:09:33.702 5217.674 - 5242.880: 2.5912% ( 178) 00:09:33.702 5242.880 - 5268.086: 3.7219% ( 233) 00:09:33.702 5268.086 - 5293.292: 5.0369% ( 271) 00:09:33.702 5293.292 - 5318.498: 6.3228% ( 265) 00:09:33.702 5318.498 - 5343.705: 7.5214% ( 247) 00:09:33.702 5343.705 - 5368.911: 8.7490% ( 253) 00:09:33.702 5368.911 - 5394.117: 10.0786% ( 274) 00:09:33.702 5394.117 - 5419.323: 11.3985% ( 272) 00:09:33.702 5419.323 - 5444.529: 12.8009% ( 289) 00:09:33.702 5444.529 
- 5469.735: 14.1838% ( 285) 00:09:33.702 5469.735 - 5494.942: 15.6201% ( 296) 00:09:33.702 5494.942 - 5520.148: 17.1390% ( 313) 00:09:33.702 5520.148 - 5545.354: 18.5802% ( 297) 00:09:33.702 5545.354 - 5570.560: 19.9486% ( 282) 00:09:33.702 5570.560 - 5595.766: 21.3849% ( 296) 00:09:33.702 5595.766 - 5620.972: 22.8309% ( 298) 00:09:33.702 5620.972 - 5646.178: 24.2285% ( 288) 00:09:33.702 5646.178 - 5671.385: 25.6405% ( 291) 00:09:33.702 5671.385 - 5696.591: 27.0477% ( 290) 00:09:33.702 5696.591 - 5721.797: 28.4501% ( 289) 00:09:33.702 5721.797 - 5747.003: 29.9010% ( 299) 00:09:33.702 5747.003 - 5772.209: 31.3373% ( 296) 00:09:33.702 5772.209 - 5797.415: 32.7494% ( 291) 00:09:33.702 5797.415 - 5822.622: 34.1469% ( 288) 00:09:33.702 5822.622 - 5847.828: 35.5784% ( 295) 00:09:33.702 5847.828 - 5873.034: 37.0050% ( 294) 00:09:33.702 5873.034 - 5898.240: 38.4365% ( 295) 00:09:33.702 5898.240 - 5923.446: 39.8826% ( 298) 00:09:33.702 5923.446 - 5948.652: 41.3141% ( 295) 00:09:33.702 5948.652 - 5973.858: 42.7649% ( 299) 00:09:33.702 5973.858 - 5999.065: 44.1916% ( 294) 00:09:33.702 5999.065 - 6024.271: 45.6134% ( 293) 00:09:33.702 6024.271 - 6049.477: 47.0157% ( 289) 00:09:33.702 6049.477 - 6074.683: 48.4424% ( 294) 00:09:33.702 6074.683 - 6099.889: 49.8787% ( 296) 00:09:33.702 6099.889 - 6125.095: 51.3247% ( 298) 00:09:33.702 6125.095 - 6150.302: 52.7611% ( 296) 00:09:33.702 6150.302 - 6175.508: 54.2023% ( 297) 00:09:33.702 6175.508 - 6200.714: 55.6483% ( 298) 00:09:33.702 6200.714 - 6225.920: 57.0701% ( 293) 00:09:33.702 6225.920 - 6251.126: 58.4433% ( 283) 00:09:33.702 6251.126 - 6276.332: 59.8845% ( 297) 00:09:33.702 6276.332 - 6301.538: 61.3208% ( 296) 00:09:33.702 6301.538 - 6326.745: 62.7329% ( 291) 00:09:33.702 6326.745 - 6351.951: 64.1595% ( 294) 00:09:33.702 6351.951 - 6377.157: 65.5959% ( 296) 00:09:33.702 6377.157 - 6402.363: 67.0371% ( 297) 00:09:33.702 6402.363 - 6427.569: 68.4977% ( 301) 00:09:33.702 6427.569 - 6452.775: 69.9680% ( 303) 00:09:33.702 6452.775 - 6503.188: 72.8601% ( 596) 00:09:33.702 6503.188 - 6553.600: 75.7570% ( 597) 00:09:33.702 6553.600 - 6604.012: 78.6151% ( 589) 00:09:33.702 6604.012 - 6654.425: 81.5266% ( 600) 00:09:33.702 6654.425 - 6704.837: 84.4720% ( 607) 00:09:33.702 6704.837 - 6755.249: 87.3350% ( 590) 00:09:33.702 6755.249 - 6805.662: 90.1737% ( 585) 00:09:33.702 6805.662 - 6856.074: 92.8183% ( 545) 00:09:33.702 6856.074 - 6906.486: 94.9728% ( 444) 00:09:33.702 6906.486 - 6956.898: 96.2781% ( 269) 00:09:33.702 6956.898 - 7007.311: 96.9284% ( 134) 00:09:33.702 7007.311 - 7057.723: 97.2875% ( 74) 00:09:33.702 7057.723 - 7108.135: 97.5641% ( 57) 00:09:33.702 7108.135 - 7158.548: 97.7970% ( 48) 00:09:33.702 7158.548 - 7208.960: 97.9668% ( 35) 00:09:33.702 7208.960 - 7259.372: 98.0639% ( 20) 00:09:33.702 7259.372 - 7309.785: 98.1561% ( 19) 00:09:33.702 7309.785 - 7360.197: 98.2337% ( 16) 00:09:33.702 7360.197 - 7410.609: 98.3065% ( 15) 00:09:33.703 7410.609 - 7461.022: 98.3647% ( 12) 00:09:33.703 7461.022 - 7511.434: 98.3938% ( 6) 00:09:33.703 7511.434 - 7561.846: 98.4132% ( 4) 00:09:33.703 7561.846 - 7612.258: 98.4278% ( 3) 00:09:33.703 7612.258 - 7662.671: 98.4472% ( 4) 00:09:33.703 7662.671 - 7713.083: 98.4666% ( 4) 00:09:33.703 7713.083 - 7763.495: 98.4812% ( 3) 00:09:33.703 7763.495 - 7813.908: 98.5006% ( 4) 00:09:33.703 7813.908 - 7864.320: 98.5200% ( 4) 00:09:33.703 7864.320 - 7914.732: 98.5394% ( 4) 00:09:33.703 7914.732 - 7965.145: 98.5588% ( 4) 00:09:33.703 7965.145 - 8015.557: 98.5782% ( 4) 00:09:33.703 8015.557 - 8065.969: 98.5928% ( 3) 
00:09:33.703 8065.969 - 8116.382: 98.6122% ( 4) 00:09:33.703 8116.382 - 8166.794: 98.6267% ( 3) 00:09:33.703 8166.794 - 8217.206: 98.6413% ( 3) 00:09:33.703 8217.206 - 8267.618: 98.6607% ( 4) 00:09:33.703 8267.618 - 8318.031: 98.6801% ( 4) 00:09:33.703 8318.031 - 8368.443: 98.6947% ( 3) 00:09:33.703 8368.443 - 8418.855: 98.7141% ( 4) 00:09:33.703 8418.855 - 8469.268: 98.7335% ( 4) 00:09:33.703 8469.268 - 8519.680: 98.7529% ( 4) 00:09:33.703 8519.680 - 8570.092: 98.7578% ( 1) 00:09:33.703 10889.058 - 10939.471: 98.7675% ( 2) 00:09:33.703 10939.471 - 10989.883: 98.7772% ( 2) 00:09:33.703 10989.883 - 11040.295: 98.7869% ( 2) 00:09:33.703 11040.295 - 11090.708: 98.8014% ( 3) 00:09:33.703 11090.708 - 11141.120: 98.8111% ( 2) 00:09:33.703 11141.120 - 11191.532: 98.8208% ( 2) 00:09:33.703 11191.532 - 11241.945: 98.8257% ( 1) 00:09:33.703 11241.945 - 11292.357: 98.8354% ( 2) 00:09:33.703 11292.357 - 11342.769: 98.8548% ( 4) 00:09:33.703 11342.769 - 11393.182: 98.8645% ( 2) 00:09:33.703 11393.182 - 11443.594: 98.8742% ( 2) 00:09:33.703 11443.594 - 11494.006: 98.8888% ( 3) 00:09:33.703 11494.006 - 11544.418: 98.8985% ( 2) 00:09:33.703 11544.418 - 11594.831: 98.9033% ( 1) 00:09:33.703 11594.831 - 11645.243: 98.9130% ( 2) 00:09:33.703 11645.243 - 11695.655: 98.9276% ( 3) 00:09:33.703 11695.655 - 11746.068: 98.9373% ( 2) 00:09:33.703 11746.068 - 11796.480: 98.9470% ( 2) 00:09:33.703 11796.480 - 11846.892: 98.9567% ( 2) 00:09:33.703 11846.892 - 11897.305: 98.9664% ( 2) 00:09:33.703 11897.305 - 11947.717: 98.9858% ( 4) 00:09:33.703 11947.717 - 11998.129: 99.0004% ( 3) 00:09:33.703 11998.129 - 12048.542: 99.0101% ( 2) 00:09:33.703 12048.542 - 12098.954: 99.0198% ( 2) 00:09:33.703 12098.954 - 12149.366: 99.0295% ( 2) 00:09:33.703 12149.366 - 12199.778: 99.0392% ( 2) 00:09:33.703 12199.778 - 12250.191: 99.0538% ( 3) 00:09:33.703 12250.191 - 12300.603: 99.0635% ( 2) 00:09:33.703 12351.015 - 12401.428: 99.0732% ( 2) 00:09:33.703 12401.428 - 12451.840: 99.0829% ( 2) 00:09:33.703 12451.840 - 12502.252: 99.0974% ( 3) 00:09:33.703 12502.252 - 12552.665: 99.1071% ( 2) 00:09:33.703 12552.665 - 12603.077: 99.1168% ( 2) 00:09:33.703 12603.077 - 12653.489: 99.1266% ( 2) 00:09:33.703 12653.489 - 12703.902: 99.1363% ( 2) 00:09:33.703 12703.902 - 12754.314: 99.1460% ( 2) 00:09:33.703 12754.314 - 12804.726: 99.1557% ( 2) 00:09:33.703 12804.726 - 12855.138: 99.1702% ( 3) 00:09:33.703 12855.138 - 12905.551: 99.1799% ( 2) 00:09:33.703 12905.551 - 13006.375: 99.1993% ( 4) 00:09:33.703 13006.375 - 13107.200: 99.2188% ( 4) 00:09:33.703 13107.200 - 13208.025: 99.2430% ( 5) 00:09:33.703 13208.025 - 13308.849: 99.2624% ( 4) 00:09:33.703 13308.849 - 13409.674: 99.2867% ( 5) 00:09:33.703 13409.674 - 13510.498: 99.3061% ( 4) 00:09:33.703 13510.498 - 13611.323: 99.3255% ( 4) 00:09:33.703 13611.323 - 13712.148: 99.3498% ( 5) 00:09:33.703 13712.148 - 13812.972: 99.3692% ( 4) 00:09:33.703 13812.972 - 13913.797: 99.3789% ( 2) 00:09:33.703 24903.680 - 25004.505: 99.3837% ( 1) 00:09:33.703 25004.505 - 25105.329: 99.4031% ( 4) 00:09:33.703 25105.329 - 25206.154: 99.4274% ( 5) 00:09:33.703 25206.154 - 25306.978: 99.4468% ( 4) 00:09:33.703 25306.978 - 25407.803: 99.4662% ( 4) 00:09:33.703 25407.803 - 25508.628: 99.4856% ( 4) 00:09:33.703 25508.628 - 25609.452: 99.5050% ( 4) 00:09:33.703 25609.452 - 25710.277: 99.5293% ( 5) 00:09:33.703 25710.277 - 25811.102: 99.5487% ( 4) 00:09:33.703 25811.102 - 26012.751: 99.5875% ( 8) 00:09:33.703 26012.751 - 26214.400: 99.6264% ( 8) 00:09:33.703 26214.400 - 26416.049: 99.6652% ( 8) 00:09:33.703 26416.049 - 
26617.698: 99.7040% ( 8) 00:09:33.703 26617.698 - 26819.348: 99.7428% ( 8) 00:09:33.703 26819.348 - 27020.997: 99.7865% ( 9) 00:09:33.703 27020.997 - 27222.646: 99.8253% ( 8) 00:09:33.703 27222.646 - 27424.295: 99.8641% ( 8) 00:09:33.703 27424.295 - 27625.945: 99.9078% ( 9) 00:09:33.703 27625.945 - 27827.594: 99.9466% ( 8) 00:09:33.703 27827.594 - 28029.243: 99.9903% ( 9) 00:09:33.703 28029.243 - 28230.892: 100.0000% ( 2) 00:09:33.703 00:09:33.703 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:33.703 ============================================================================== 00:09:33.703 Range in us Cumulative IO count 00:09:33.703 5091.643 - 5116.849: 0.0194% ( 4) 00:09:33.703 5116.849 - 5142.055: 0.1456% ( 26) 00:09:33.703 5142.055 - 5167.262: 0.4028% ( 53) 00:09:33.703 5167.262 - 5192.468: 0.8734% ( 97) 00:09:33.703 5192.468 - 5217.674: 1.7129% ( 173) 00:09:33.703 5217.674 - 5242.880: 2.7756% ( 219) 00:09:33.703 5242.880 - 5268.086: 3.8723% ( 226) 00:09:33.703 5268.086 - 5293.292: 5.0029% ( 233) 00:09:33.703 5293.292 - 5318.498: 6.3179% ( 271) 00:09:33.703 5318.498 - 5343.705: 7.6572% ( 276) 00:09:33.703 5343.705 - 5368.911: 9.0353% ( 284) 00:09:33.703 5368.911 - 5394.117: 10.4571% ( 293) 00:09:33.703 5394.117 - 5419.323: 11.8886% ( 295) 00:09:33.703 5419.323 - 5444.529: 13.2812% ( 287) 00:09:33.703 5444.529 - 5469.735: 14.6594% ( 284) 00:09:33.703 5469.735 - 5494.942: 16.0811% ( 293) 00:09:33.703 5494.942 - 5520.148: 17.4544% ( 283) 00:09:33.703 5520.148 - 5545.354: 18.8810% ( 294) 00:09:33.703 5545.354 - 5570.560: 20.3076% ( 294) 00:09:33.703 5570.560 - 5595.766: 21.7246% ( 292) 00:09:33.703 5595.766 - 5620.972: 23.1464% ( 293) 00:09:33.703 5620.972 - 5646.178: 24.5487% ( 289) 00:09:33.703 5646.178 - 5671.385: 25.9171% ( 282) 00:09:33.703 5671.385 - 5696.591: 27.3438% ( 294) 00:09:33.703 5696.591 - 5721.797: 28.7849% ( 297) 00:09:33.703 5721.797 - 5747.003: 30.1873% ( 289) 00:09:33.703 5747.003 - 5772.209: 31.5751% ( 286) 00:09:33.703 5772.209 - 5797.415: 32.9872% ( 291) 00:09:33.703 5797.415 - 5822.622: 34.3701% ( 285) 00:09:33.703 5822.622 - 5847.828: 35.7774% ( 290) 00:09:33.703 5847.828 - 5873.034: 37.1846% ( 290) 00:09:33.703 5873.034 - 5898.240: 38.6064% ( 293) 00:09:33.703 5898.240 - 5923.446: 40.0184% ( 291) 00:09:33.703 5923.446 - 5948.652: 41.4499% ( 295) 00:09:33.703 5948.652 - 5973.858: 42.8523% ( 289) 00:09:33.703 5973.858 - 5999.065: 44.2595% ( 290) 00:09:33.703 5999.065 - 6024.271: 45.7056% ( 298) 00:09:33.703 6024.271 - 6049.477: 47.1322% ( 294) 00:09:33.703 6049.477 - 6074.683: 48.5685% ( 296) 00:09:33.703 6074.683 - 6099.889: 49.9709% ( 289) 00:09:33.703 6099.889 - 6125.095: 51.4218% ( 299) 00:09:33.703 6125.095 - 6150.302: 52.8727% ( 299) 00:09:33.703 6150.302 - 6175.508: 54.2944% ( 293) 00:09:33.703 6175.508 - 6200.714: 55.7405% ( 298) 00:09:33.703 6200.714 - 6225.920: 57.1526% ( 291) 00:09:33.703 6225.920 - 6251.126: 58.5646% ( 291) 00:09:33.703 6251.126 - 6276.332: 59.9961% ( 295) 00:09:33.703 6276.332 - 6301.538: 61.4325% ( 296) 00:09:33.703 6301.538 - 6326.745: 62.8736% ( 297) 00:09:33.703 6326.745 - 6351.951: 64.3342% ( 301) 00:09:33.703 6351.951 - 6377.157: 65.7851% ( 299) 00:09:33.703 6377.157 - 6402.363: 67.1826% ( 288) 00:09:33.703 6402.363 - 6427.569: 68.6190% ( 296) 00:09:33.703 6427.569 - 6452.775: 70.0602% ( 297) 00:09:33.703 6452.775 - 6503.188: 72.9523% ( 596) 00:09:33.703 6503.188 - 6553.600: 75.7910% ( 585) 00:09:33.703 6553.600 - 6604.012: 78.6102% ( 581) 00:09:33.703 6604.012 - 6654.425: 81.4150% ( 578) 00:09:33.703 
6654.425 - 6704.837: 84.2246% ( 579) 00:09:33.703 6704.837 - 6755.249: 87.0196% ( 576) 00:09:33.703 6755.249 - 6805.662: 89.7952% ( 572) 00:09:33.703 6805.662 - 6856.074: 92.3573% ( 528) 00:09:33.703 6856.074 - 6906.486: 94.2304% ( 386) 00:09:33.703 6906.486 - 6956.898: 95.4095% ( 243) 00:09:33.703 6956.898 - 7007.311: 96.1277% ( 148) 00:09:33.703 7007.311 - 7057.723: 96.5693% ( 91) 00:09:33.703 7057.723 - 7108.135: 96.9526% ( 79) 00:09:33.703 7108.135 - 7158.548: 97.2486% ( 61) 00:09:33.703 7158.548 - 7208.960: 97.4719% ( 46) 00:09:33.703 7208.960 - 7259.372: 97.6562% ( 38) 00:09:33.703 7259.372 - 7309.785: 97.7970% ( 29) 00:09:33.703 7309.785 - 7360.197: 97.9134% ( 24) 00:09:33.703 7360.197 - 7410.609: 97.9862% ( 15) 00:09:33.703 7410.609 - 7461.022: 98.0396% ( 11) 00:09:33.703 7461.022 - 7511.434: 98.0833% ( 9) 00:09:33.703 7511.434 - 7561.846: 98.1221% ( 8) 00:09:33.703 7561.846 - 7612.258: 98.1464% ( 5) 00:09:33.703 7612.258 - 7662.671: 98.1755% ( 6) 00:09:33.703 7662.671 - 7713.083: 98.2046% ( 6) 00:09:33.703 7713.083 - 7763.495: 98.2337% ( 6) 00:09:33.703 7763.495 - 7813.908: 98.2677% ( 7) 00:09:33.703 7813.908 - 7864.320: 98.2968% ( 6) 00:09:33.703 7864.320 - 7914.732: 98.3259% ( 6) 00:09:33.703 7914.732 - 7965.145: 98.3550% ( 6) 00:09:33.703 7965.145 - 8015.557: 98.3841% ( 6) 00:09:33.703 8015.557 - 8065.969: 98.4132% ( 6) 00:09:33.703 8065.969 - 8116.382: 98.4424% ( 6) 00:09:33.703 8116.382 - 8166.794: 98.4666% ( 5) 00:09:33.703 8166.794 - 8217.206: 98.4957% ( 6) 00:09:33.703 8217.206 - 8267.618: 98.5248% ( 6) 00:09:33.703 8267.618 - 8318.031: 98.5540% ( 6) 00:09:33.703 8318.031 - 8368.443: 98.5831% ( 6) 00:09:33.703 8368.443 - 8418.855: 98.6073% ( 5) 00:09:33.703 8418.855 - 8469.268: 98.6365% ( 6) 00:09:33.703 8469.268 - 8519.680: 98.6607% ( 5) 00:09:33.703 8519.680 - 8570.092: 98.6801% ( 4) 00:09:33.703 8570.092 - 8620.505: 98.6995% ( 4) 00:09:33.703 8620.505 - 8670.917: 98.7189% ( 4) 00:09:33.704 8670.917 - 8721.329: 98.7384% ( 4) 00:09:33.704 8721.329 - 8771.742: 98.7578% ( 4) 00:09:33.704 9931.225 - 9981.637: 98.7626% ( 1) 00:09:33.704 9981.637 - 10032.049: 98.7820% ( 4) 00:09:33.704 10032.049 - 10082.462: 98.7869% ( 1) 00:09:33.704 10082.462 - 10132.874: 98.7917% ( 1) 00:09:33.704 10132.874 - 10183.286: 98.7966% ( 1) 00:09:33.704 10183.286 - 10233.698: 98.8014% ( 1) 00:09:33.704 10233.698 - 10284.111: 98.8063% ( 1) 00:09:33.704 10284.111 - 10334.523: 98.8111% ( 1) 00:09:33.704 10334.523 - 10384.935: 98.8208% ( 2) 00:09:33.704 10384.935 - 10435.348: 98.8257% ( 1) 00:09:33.704 10435.348 - 10485.760: 98.8306% ( 1) 00:09:33.704 10485.760 - 10536.172: 98.8403% ( 2) 00:09:33.704 10536.172 - 10586.585: 98.8548% ( 3) 00:09:33.704 10586.585 - 10636.997: 98.8645% ( 2) 00:09:33.704 10636.997 - 10687.409: 98.8742% ( 2) 00:09:33.704 10687.409 - 10737.822: 98.8791% ( 1) 00:09:33.704 10737.822 - 10788.234: 98.8888% ( 2) 00:09:33.704 10788.234 - 10838.646: 98.8936% ( 1) 00:09:33.704 10838.646 - 10889.058: 98.9033% ( 2) 00:09:33.704 10889.058 - 10939.471: 98.9082% ( 1) 00:09:33.704 10939.471 - 10989.883: 98.9179% ( 2) 00:09:33.704 10989.883 - 11040.295: 98.9227% ( 1) 00:09:33.704 11040.295 - 11090.708: 98.9276% ( 1) 00:09:33.704 11090.708 - 11141.120: 98.9325% ( 1) 00:09:33.704 11141.120 - 11191.532: 98.9422% ( 2) 00:09:33.704 11191.532 - 11241.945: 98.9470% ( 1) 00:09:33.704 11241.945 - 11292.357: 98.9567% ( 2) 00:09:33.704 11292.357 - 11342.769: 98.9616% ( 1) 00:09:33.704 11342.769 - 11393.182: 98.9713% ( 2) 00:09:33.704 11393.182 - 11443.594: 98.9761% ( 1) 00:09:33.704 11443.594 - 
11494.006: 98.9858% ( 2) 00:09:33.704 11494.006 - 11544.418: 98.9907% ( 1) 00:09:33.704 11544.418 - 11594.831: 99.0004% ( 2) 00:09:33.704 11594.831 - 11645.243: 99.0052% ( 1) 00:09:33.704 11645.243 - 11695.655: 99.0149% ( 2) 00:09:33.704 11695.655 - 11746.068: 99.0198% ( 1) 00:09:33.704 11746.068 - 11796.480: 99.0295% ( 2) 00:09:33.704 11796.480 - 11846.892: 99.0344% ( 1) 00:09:33.704 11846.892 - 11897.305: 99.0392% ( 1) 00:09:33.704 11897.305 - 11947.717: 99.0489% ( 2) 00:09:33.704 11947.717 - 11998.129: 99.0538% ( 1) 00:09:33.704 11998.129 - 12048.542: 99.0635% ( 2) 00:09:33.704 12048.542 - 12098.954: 99.0683% ( 1) 00:09:33.704 12098.954 - 12149.366: 99.0780% ( 2) 00:09:33.704 12149.366 - 12199.778: 99.0829% ( 1) 00:09:33.704 12199.778 - 12250.191: 99.0877% ( 1) 00:09:33.704 12250.191 - 12300.603: 99.0974% ( 2) 00:09:33.704 12300.603 - 12351.015: 99.1023% ( 1) 00:09:33.704 12351.015 - 12401.428: 99.1120% ( 2) 00:09:33.704 12401.428 - 12451.840: 99.1168% ( 1) 00:09:33.704 12451.840 - 12502.252: 99.1266% ( 2) 00:09:33.704 12502.252 - 12552.665: 99.1314% ( 1) 00:09:33.704 12552.665 - 12603.077: 99.1411% ( 2) 00:09:33.704 12603.077 - 12653.489: 99.1460% ( 1) 00:09:33.704 12653.489 - 12703.902: 99.1557% ( 2) 00:09:33.704 12703.902 - 12754.314: 99.1605% ( 1) 00:09:33.704 12754.314 - 12804.726: 99.1702% ( 2) 00:09:33.704 12804.726 - 12855.138: 99.1751% ( 1) 00:09:33.704 12855.138 - 12905.551: 99.1848% ( 2) 00:09:33.704 12905.551 - 13006.375: 99.1993% ( 3) 00:09:33.704 13006.375 - 13107.200: 99.2139% ( 3) 00:09:33.704 13107.200 - 13208.025: 99.2285% ( 3) 00:09:33.704 13208.025 - 13308.849: 99.2430% ( 3) 00:09:33.704 13308.849 - 13409.674: 99.2576% ( 3) 00:09:33.704 13409.674 - 13510.498: 99.2721% ( 3) 00:09:33.704 13510.498 - 13611.323: 99.2818% ( 2) 00:09:33.704 13611.323 - 13712.148: 99.2964% ( 3) 00:09:33.704 13712.148 - 13812.972: 99.3061% ( 2) 00:09:33.704 13812.972 - 13913.797: 99.3207% ( 3) 00:09:33.704 13913.797 - 14014.622: 99.3352% ( 3) 00:09:33.704 14014.622 - 14115.446: 99.3498% ( 3) 00:09:33.704 14115.446 - 14216.271: 99.3595% ( 2) 00:09:33.704 14216.271 - 14317.095: 99.3740% ( 3) 00:09:33.704 14317.095 - 14417.920: 99.3789% ( 1) 00:09:33.704 24399.557 - 24500.382: 99.3837% ( 1) 00:09:33.704 24500.382 - 24601.206: 99.3983% ( 3) 00:09:33.704 24601.206 - 24702.031: 99.4128% ( 3) 00:09:33.704 24702.031 - 24802.855: 99.4323% ( 4) 00:09:33.704 24802.855 - 24903.680: 99.4517% ( 4) 00:09:33.704 24903.680 - 25004.505: 99.4759% ( 5) 00:09:33.704 25004.505 - 25105.329: 99.4953% ( 4) 00:09:33.704 25105.329 - 25206.154: 99.5148% ( 4) 00:09:33.704 25206.154 - 25306.978: 99.5342% ( 4) 00:09:33.704 25306.978 - 25407.803: 99.5536% ( 4) 00:09:33.704 25407.803 - 25508.628: 99.5778% ( 5) 00:09:33.704 25508.628 - 25609.452: 99.5972% ( 4) 00:09:33.704 25609.452 - 25710.277: 99.6167% ( 4) 00:09:33.704 25710.277 - 25811.102: 99.6361% ( 4) 00:09:33.704 25811.102 - 26012.751: 99.6749% ( 8) 00:09:33.704 26012.751 - 26214.400: 99.7137% ( 8) 00:09:33.704 26214.400 - 26416.049: 99.7525% ( 8) 00:09:33.704 26416.049 - 26617.698: 99.7913% ( 8) 00:09:33.704 26617.698 - 26819.348: 99.8350% ( 9) 00:09:33.704 26819.348 - 27020.997: 99.8738% ( 8) 00:09:33.704 27020.997 - 27222.646: 99.9127% ( 8) 00:09:33.704 27222.646 - 27424.295: 99.9563% ( 9) 00:09:33.704 27424.295 - 27625.945: 99.9951% ( 8) 00:09:33.704 27625.945 - 27827.594: 100.0000% ( 1) 00:09:33.704 00:09:33.704 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:33.704 
============================================================================== 00:09:33.704 Range in us Cumulative IO count 00:09:33.704 5091.643 - 5116.849: 0.0825% ( 17) 00:09:33.704 5116.849 - 5142.055: 0.1941% ( 23) 00:09:33.704 5142.055 - 5167.262: 0.4707% ( 57) 00:09:33.704 5167.262 - 5192.468: 0.9899% ( 107) 00:09:33.704 5192.468 - 5217.674: 1.6838% ( 143) 00:09:33.704 5217.674 - 5242.880: 2.5912% ( 187) 00:09:33.704 5242.880 - 5268.086: 3.6588% ( 220) 00:09:33.704 5268.086 - 5293.292: 4.7409% ( 223) 00:09:33.704 5293.292 - 5318.498: 5.9491% ( 249) 00:09:33.704 5318.498 - 5343.705: 7.2884% ( 276) 00:09:33.704 5343.705 - 5368.911: 8.5695% ( 264) 00:09:33.704 5368.911 - 5394.117: 9.8942% ( 273) 00:09:33.704 5394.117 - 5419.323: 11.2966% ( 289) 00:09:33.704 5419.323 - 5444.529: 12.6650% ( 282) 00:09:33.704 5444.529 - 5469.735: 14.1062% ( 297) 00:09:33.704 5469.735 - 5494.942: 15.5085% ( 289) 00:09:33.704 5494.942 - 5520.148: 16.9449% ( 296) 00:09:33.704 5520.148 - 5545.354: 18.3715% ( 294) 00:09:33.704 5545.354 - 5570.560: 19.7156% ( 277) 00:09:33.704 5570.560 - 5595.766: 21.1423% ( 294) 00:09:33.704 5595.766 - 5620.972: 22.5738% ( 295) 00:09:33.704 5620.972 - 5646.178: 24.0344% ( 301) 00:09:33.704 5646.178 - 5671.385: 25.4464% ( 291) 00:09:33.704 5671.385 - 5696.591: 26.8439% ( 288) 00:09:33.704 5696.591 - 5721.797: 28.2803% ( 296) 00:09:33.704 5721.797 - 5747.003: 29.7021% ( 293) 00:09:33.704 5747.003 - 5772.209: 31.1335% ( 295) 00:09:33.704 5772.209 - 5797.415: 32.5796% ( 298) 00:09:33.704 5797.415 - 5822.622: 34.0159% ( 296) 00:09:33.704 5822.622 - 5847.828: 35.4280% ( 291) 00:09:33.704 5847.828 - 5873.034: 36.8449% ( 292) 00:09:33.704 5873.034 - 5898.240: 38.2521% ( 290) 00:09:33.704 5898.240 - 5923.446: 39.6642% ( 291) 00:09:33.704 5923.446 - 5948.652: 41.0860% ( 293) 00:09:33.704 5948.652 - 5973.858: 42.5223% ( 296) 00:09:33.704 5973.858 - 5999.065: 43.9247% ( 289) 00:09:33.704 5999.065 - 6024.271: 45.3659% ( 297) 00:09:33.704 6024.271 - 6049.477: 46.7925% ( 294) 00:09:33.704 6049.477 - 6074.683: 48.2143% ( 293) 00:09:33.704 6074.683 - 6099.889: 49.6409% ( 294) 00:09:33.704 6099.889 - 6125.095: 51.0675% ( 294) 00:09:33.704 6125.095 - 6150.302: 52.5039% ( 296) 00:09:33.704 6150.302 - 6175.508: 53.9111% ( 290) 00:09:33.704 6175.508 - 6200.714: 55.3426% ( 295) 00:09:33.704 6200.714 - 6225.920: 56.7741% ( 295) 00:09:33.704 6225.920 - 6251.126: 58.2104% ( 296) 00:09:33.704 6251.126 - 6276.332: 59.6225% ( 291) 00:09:33.704 6276.332 - 6301.538: 61.0394% ( 292) 00:09:33.704 6301.538 - 6326.745: 62.4709% ( 295) 00:09:33.704 6326.745 - 6351.951: 63.9072% ( 296) 00:09:33.704 6351.951 - 6377.157: 65.3969% ( 307) 00:09:33.704 6377.157 - 6402.363: 66.8236% ( 294) 00:09:33.704 6402.363 - 6427.569: 68.2259% ( 289) 00:09:33.704 6427.569 - 6452.775: 69.6623% ( 296) 00:09:33.704 6452.775 - 6503.188: 72.5010% ( 585) 00:09:33.704 6503.188 - 6553.600: 75.4173% ( 601) 00:09:33.704 6553.600 - 6604.012: 78.3385% ( 602) 00:09:33.704 6604.012 - 6654.425: 81.2451% ( 599) 00:09:33.704 6654.425 - 6704.837: 84.1275% ( 594) 00:09:33.704 6704.837 - 6755.249: 87.0439% ( 601) 00:09:33.704 6755.249 - 6805.662: 89.9165% ( 592) 00:09:33.704 6805.662 - 6856.074: 92.6000% ( 553) 00:09:33.704 6856.074 - 6906.486: 94.5943% ( 411) 00:09:33.704 6906.486 - 6956.898: 95.8511% ( 259) 00:09:33.704 6956.898 - 7007.311: 96.4722% ( 128) 00:09:33.704 7007.311 - 7057.723: 96.8362% ( 75) 00:09:33.704 7057.723 - 7108.135: 97.1176% ( 58) 00:09:33.705 7108.135 - 7158.548: 97.3360% ( 45) 00:09:33.705 7158.548 - 7208.960: 97.5058% ( 35) 
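Note on reading the histograms above: each row is a cumulative latency bucket of the form "lo - hi: pct% ( count)", where count is the number of IOs that completed with latency in the lo..hi microsecond range and pct is the fraction of all IOs for that namespace that completed at or below hi (the parenthesized counts are per-bucket; the percentage column is the cumulative part of the "Cumulative IO count" header). A minimal parsing sketch, assuming only that record shape; the helper name is illustrative and not part of SPDK:

    import re

    # Matches records such as "6755.249 - 6805.662: 89.7952% ( 572)".
    BUCKET = re.compile(r'(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)')

    def percentile_bucket(log_text, target):
        """Return (lo_us, hi_us, cum_pct) of the first bucket whose
        cumulative percentage reaches `target` percent."""
        for lo, hi, pct, _count in BUCKET.findall(log_text):
            if float(pct) >= target:
                return float(lo), float(hi), float(pct)
        return None

    # Two consecutive records copied from the output above:
    sample = "6755.249 - 6805.662: 89.7952% ( 572) 6805.662 - 6856.074: 92.3573% ( 528)"
    print(percentile_bucket(sample, 90.0))  # -> (6805.662, 6856.074, 92.3573)

Read that way, roughly 90% of that namespace's IOs completed within about 6.86 ms, which is how the "90.00000% : ..." lines in the summary sections are derived.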
00:09:33.705 7208.960 - 7259.372: 97.6271% ( 25) 00:09:33.705 7259.372 - 7309.785: 97.7145% ( 18) 00:09:33.705 7309.785 - 7360.197: 97.7873% ( 15) 00:09:33.705 7360.197 - 7410.609: 97.8406% ( 11) 00:09:33.705 7410.609 - 7461.022: 97.8892% ( 10) 00:09:33.705 7461.022 - 7511.434: 97.9183% ( 6) 00:09:33.705 7511.434 - 7561.846: 97.9571% ( 8) 00:09:33.705 7561.846 - 7612.258: 97.9911% ( 7) 00:09:33.705 7612.258 - 7662.671: 98.0153% ( 5) 00:09:33.705 7662.671 - 7713.083: 98.0444% ( 6) 00:09:33.705 7713.083 - 7763.495: 98.0687% ( 5) 00:09:33.705 7763.495 - 7813.908: 98.0978% ( 6) 00:09:33.705 7813.908 - 7864.320: 98.1221% ( 5) 00:09:33.705 7864.320 - 7914.732: 98.1464% ( 5) 00:09:33.705 7914.732 - 7965.145: 98.1755% ( 6) 00:09:33.705 7965.145 - 8015.557: 98.2046% ( 6) 00:09:33.705 8015.557 - 8065.969: 98.2288% ( 5) 00:09:33.705 8065.969 - 8116.382: 98.2580% ( 6) 00:09:33.705 8116.382 - 8166.794: 98.2871% ( 6) 00:09:33.705 8166.794 - 8217.206: 98.3162% ( 6) 00:09:33.705 8217.206 - 8267.618: 98.3453% ( 6) 00:09:33.705 8267.618 - 8318.031: 98.3793% ( 7) 00:09:33.705 8318.031 - 8368.443: 98.4132% ( 7) 00:09:33.705 8368.443 - 8418.855: 98.4424% ( 6) 00:09:33.705 8418.855 - 8469.268: 98.4569% ( 3) 00:09:33.705 8469.268 - 8519.680: 98.4715% ( 3) 00:09:33.705 8519.680 - 8570.092: 98.4860% ( 3) 00:09:33.705 8570.092 - 8620.505: 98.5006% ( 3) 00:09:33.705 8620.505 - 8670.917: 98.5151% ( 3) 00:09:33.705 8670.917 - 8721.329: 98.5297% ( 3) 00:09:33.705 8721.329 - 8771.742: 98.5443% ( 3) 00:09:33.705 8771.742 - 8822.154: 98.5540% ( 2) 00:09:33.705 8822.154 - 8872.566: 98.5734% ( 4) 00:09:33.705 8872.566 - 8922.978: 98.5879% ( 3) 00:09:33.705 8922.978 - 8973.391: 98.6025% ( 3) 00:09:33.705 8973.391 - 9023.803: 98.6170% ( 3) 00:09:33.705 9023.803 - 9074.215: 98.6316% ( 3) 00:09:33.705 9074.215 - 9124.628: 98.6462% ( 3) 00:09:33.705 9124.628 - 9175.040: 98.6607% ( 3) 00:09:33.705 9175.040 - 9225.452: 98.6753% ( 3) 00:09:33.705 9225.452 - 9275.865: 98.6898% ( 3) 00:09:33.705 9275.865 - 9326.277: 98.7044% ( 3) 00:09:33.705 9326.277 - 9376.689: 98.7189% ( 3) 00:09:33.705 9376.689 - 9427.102: 98.7335% ( 3) 00:09:33.705 9427.102 - 9477.514: 98.7481% ( 3) 00:09:33.705 9477.514 - 9527.926: 98.7578% ( 2) 00:09:33.705 10132.874 - 10183.286: 98.7675% ( 2) 00:09:33.705 10183.286 - 10233.698: 98.7723% ( 1) 00:09:33.705 10233.698 - 10284.111: 98.7820% ( 2) 00:09:33.705 10284.111 - 10334.523: 98.7917% ( 2) 00:09:33.705 10334.523 - 10384.935: 98.8014% ( 2) 00:09:33.705 10384.935 - 10435.348: 98.8111% ( 2) 00:09:33.705 10435.348 - 10485.760: 98.8208% ( 2) 00:09:33.705 10485.760 - 10536.172: 98.8306% ( 2) 00:09:33.705 10536.172 - 10586.585: 98.8403% ( 2) 00:09:33.705 10586.585 - 10636.997: 98.8500% ( 2) 00:09:33.705 10636.997 - 10687.409: 98.8548% ( 1) 00:09:33.705 10687.409 - 10737.822: 98.8645% ( 2) 00:09:33.705 10737.822 - 10788.234: 98.8742% ( 2) 00:09:33.705 10788.234 - 10838.646: 98.8839% ( 2) 00:09:33.705 10838.646 - 10889.058: 98.8936% ( 2) 00:09:33.705 10889.058 - 10939.471: 98.9033% ( 2) 00:09:33.705 10939.471 - 10989.883: 98.9130% ( 2) 00:09:33.705 10989.883 - 11040.295: 98.9227% ( 2) 00:09:33.705 11040.295 - 11090.708: 98.9325% ( 2) 00:09:33.705 11090.708 - 11141.120: 98.9422% ( 2) 00:09:33.705 11141.120 - 11191.532: 98.9519% ( 2) 00:09:33.705 11191.532 - 11241.945: 98.9616% ( 2) 00:09:33.705 11241.945 - 11292.357: 98.9713% ( 2) 00:09:33.705 11292.357 - 11342.769: 98.9761% ( 1) 00:09:33.705 11342.769 - 11393.182: 98.9858% ( 2) 00:09:33.705 11393.182 - 11443.594: 98.9907% ( 1) 00:09:33.705 11443.594 - 11494.006: 
99.0004% ( 2) 00:09:33.705 11494.006 - 11544.418: 99.0101% ( 2) 00:09:33.705 11544.418 - 11594.831: 99.0198% ( 2) 00:09:33.705 11594.831 - 11645.243: 99.0295% ( 2) 00:09:33.705 11645.243 - 11695.655: 99.0344% ( 1) 00:09:33.705 11695.655 - 11746.068: 99.0441% ( 2) 00:09:33.705 11746.068 - 11796.480: 99.0538% ( 2) 00:09:33.705 11796.480 - 11846.892: 99.0635% ( 2) 00:09:33.705 11846.892 - 11897.305: 99.0732% ( 2) 00:09:33.705 11897.305 - 11947.717: 99.0829% ( 2) 00:09:33.705 11947.717 - 11998.129: 99.0926% ( 2) 00:09:33.705 11998.129 - 12048.542: 99.0974% ( 1) 00:09:33.705 12048.542 - 12098.954: 99.1071% ( 2) 00:09:33.705 12098.954 - 12149.366: 99.1168% ( 2) 00:09:33.705 12149.366 - 12199.778: 99.1266% ( 2) 00:09:33.705 12199.778 - 12250.191: 99.1363% ( 2) 00:09:33.705 12250.191 - 12300.603: 99.1460% ( 2) 00:09:33.705 12300.603 - 12351.015: 99.1557% ( 2) 00:09:33.705 12351.015 - 12401.428: 99.1605% ( 1) 00:09:33.705 12401.428 - 12451.840: 99.1702% ( 2) 00:09:33.705 12451.840 - 12502.252: 99.1799% ( 2) 00:09:33.705 12502.252 - 12552.665: 99.1848% ( 1) 00:09:33.705 12552.665 - 12603.077: 99.1896% ( 1) 00:09:33.705 12603.077 - 12653.489: 99.1945% ( 1) 00:09:33.705 12653.489 - 12703.902: 99.2042% ( 2) 00:09:33.705 12703.902 - 12754.314: 99.2139% ( 2) 00:09:33.705 12754.314 - 12804.726: 99.2236% ( 2) 00:09:33.705 12804.726 - 12855.138: 99.2285% ( 1) 00:09:33.705 12855.138 - 12905.551: 99.2382% ( 2) 00:09:33.705 12905.551 - 13006.375: 99.2479% ( 2) 00:09:33.705 13006.375 - 13107.200: 99.2624% ( 3) 00:09:33.705 13107.200 - 13208.025: 99.2770% ( 3) 00:09:33.705 13208.025 - 13308.849: 99.2964% ( 4) 00:09:33.705 13308.849 - 13409.674: 99.3109% ( 3) 00:09:33.705 13409.674 - 13510.498: 99.3304% ( 4) 00:09:33.705 13510.498 - 13611.323: 99.3449% ( 3) 00:09:33.705 13611.323 - 13712.148: 99.3643% ( 4) 00:09:33.705 13712.148 - 13812.972: 99.3789% ( 3) 00:09:33.705 23189.662 - 23290.486: 99.3837% ( 1) 00:09:33.705 23290.486 - 23391.311: 99.4080% ( 5) 00:09:33.705 23391.311 - 23492.135: 99.4274% ( 4) 00:09:33.705 23492.135 - 23592.960: 99.4468% ( 4) 00:09:33.705 23592.960 - 23693.785: 99.4662% ( 4) 00:09:33.705 23693.785 - 23794.609: 99.4856% ( 4) 00:09:33.705 23794.609 - 23895.434: 99.5050% ( 4) 00:09:33.705 23895.434 - 23996.258: 99.5245% ( 4) 00:09:33.705 23996.258 - 24097.083: 99.5439% ( 4) 00:09:33.705 24097.083 - 24197.908: 99.5681% ( 5) 00:09:33.705 24197.908 - 24298.732: 99.5875% ( 4) 00:09:33.705 24298.732 - 24399.557: 99.6069% ( 4) 00:09:33.705 24399.557 - 24500.382: 99.6312% ( 5) 00:09:33.705 24500.382 - 24601.206: 99.6506% ( 4) 00:09:33.705 24601.206 - 24702.031: 99.6700% ( 4) 00:09:33.705 24702.031 - 24802.855: 99.6894% ( 4) 00:09:33.705 24802.855 - 24903.680: 99.7137% ( 5) 00:09:33.705 24903.680 - 25004.505: 99.7283% ( 3) 00:09:33.705 25004.505 - 25105.329: 99.7477% ( 4) 00:09:33.705 25105.329 - 25206.154: 99.7622% ( 3) 00:09:33.705 25206.154 - 25306.978: 99.7816% ( 4) 00:09:33.705 25306.978 - 25407.803: 99.8010% ( 4) 00:09:33.705 25407.803 - 25508.628: 99.8205% ( 4) 00:09:33.705 25508.628 - 25609.452: 99.8447% ( 5) 00:09:33.705 25609.452 - 25710.277: 99.8641% ( 4) 00:09:33.705 25710.277 - 25811.102: 99.8835% ( 4) 00:09:33.705 25811.102 - 26012.751: 99.9272% ( 9) 00:09:33.705 26012.751 - 26214.400: 99.9660% ( 8) 00:09:33.705 26214.400 - 26416.049: 100.0000% ( 7) 00:09:33.705 00:09:33.705 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:33.705 ============================================================================== 00:09:33.705 Range in us Cumulative IO count 00:09:33.705 
5041.231 - 5066.437: 0.0048% ( 1) 00:09:33.705 5066.437 - 5091.643: 0.0241% ( 4) 00:09:33.705 5091.643 - 5116.849: 0.0772% ( 11) 00:09:33.705 5116.849 - 5142.055: 0.1929% ( 24) 00:09:33.705 5142.055 - 5167.262: 0.4099% ( 45) 00:09:33.705 5167.262 - 5192.468: 0.7764% ( 76) 00:09:33.705 5192.468 - 5217.674: 1.5046% ( 151) 00:09:33.705 5217.674 - 5242.880: 2.4547% ( 197) 00:09:33.705 5242.880 - 5268.086: 3.5060% ( 218) 00:09:33.705 5268.086 - 5293.292: 4.6682% ( 241) 00:09:33.705 5293.292 - 5318.498: 5.8208% ( 239) 00:09:33.705 5318.498 - 5343.705: 7.1084% ( 267) 00:09:33.705 5343.705 - 5368.911: 8.4443% ( 277) 00:09:33.705 5368.911 - 5394.117: 9.7608% ( 273) 00:09:33.705 5394.117 - 5419.323: 11.1111% ( 280) 00:09:33.705 5419.323 - 5444.529: 12.4759% ( 283) 00:09:33.705 5444.529 - 5469.735: 13.8648% ( 288) 00:09:33.705 5469.735 - 5494.942: 15.2633% ( 290) 00:09:33.705 5494.942 - 5520.148: 16.7245% ( 303) 00:09:33.705 5520.148 - 5545.354: 18.2099% ( 308) 00:09:33.705 5545.354 - 5570.560: 19.5747% ( 283) 00:09:33.705 5570.560 - 5595.766: 20.9587% ( 287) 00:09:33.705 5595.766 - 5620.972: 22.3862% ( 296) 00:09:33.705 5620.972 - 5646.178: 23.8667% ( 307) 00:09:33.705 5646.178 - 5671.385: 25.2894% ( 295) 00:09:33.705 5671.385 - 5696.591: 26.6975% ( 292) 00:09:33.705 5696.591 - 5721.797: 28.1684% ( 305) 00:09:33.705 5721.797 - 5747.003: 29.5718% ( 291) 00:09:33.705 5747.003 - 5772.209: 30.9848% ( 293) 00:09:33.705 5772.209 - 5797.415: 32.3881% ( 291) 00:09:33.705 5797.415 - 5822.622: 33.7818% ( 289) 00:09:33.705 5822.622 - 5847.828: 35.1900% ( 292) 00:09:33.705 5847.828 - 5873.034: 36.6464% ( 302) 00:09:33.705 5873.034 - 5898.240: 38.0449% ( 290) 00:09:33.705 5898.240 - 5923.446: 39.4821% ( 298) 00:09:33.705 5923.446 - 5948.652: 40.8999% ( 294) 00:09:33.705 5948.652 - 5973.858: 42.3177% ( 294) 00:09:33.705 5973.858 - 5999.065: 43.7500% ( 297) 00:09:33.705 5999.065 - 6024.271: 45.1534% ( 291) 00:09:33.705 6024.271 - 6049.477: 46.5664% ( 293) 00:09:33.705 6049.477 - 6074.683: 47.9986% ( 297) 00:09:33.705 6074.683 - 6099.889: 49.4309% ( 297) 00:09:33.705 6099.889 - 6125.095: 50.8343% ( 291) 00:09:33.705 6125.095 - 6150.302: 52.2666% ( 297) 00:09:33.705 6150.302 - 6175.508: 53.6941% ( 296) 00:09:33.705 6175.508 - 6200.714: 55.1215% ( 296) 00:09:33.705 6200.714 - 6225.920: 56.5828% ( 303) 00:09:33.705 6225.920 - 6251.126: 57.9765% ( 289) 00:09:33.705 6251.126 - 6276.332: 59.3798% ( 291) 00:09:33.705 6276.332 - 6301.538: 60.8362% ( 302) 00:09:33.705 6301.538 - 6326.745: 62.2589% ( 295) 00:09:33.706 6326.745 - 6351.951: 63.7249% ( 304) 00:09:33.706 6351.951 - 6377.157: 65.2054% ( 307) 00:09:33.706 6377.157 - 6402.363: 66.6329% ( 296) 00:09:33.706 6402.363 - 6427.569: 68.0652% ( 297) 00:09:33.706 6427.569 - 6452.775: 69.5023% ( 298) 00:09:33.706 6452.775 - 6503.188: 72.4007% ( 601) 00:09:33.706 6503.188 - 6553.600: 75.3086% ( 603) 00:09:33.706 6553.600 - 6604.012: 78.2166% ( 603) 00:09:33.706 6604.012 - 6654.425: 81.1005% ( 598) 00:09:33.706 6654.425 - 6704.837: 83.9892% ( 599) 00:09:33.706 6704.837 - 6755.249: 86.9117% ( 606) 00:09:33.706 6755.249 - 6805.662: 89.7666% ( 592) 00:09:33.706 6805.662 - 6856.074: 92.4093% ( 548) 00:09:33.706 6856.074 - 6906.486: 94.3818% ( 409) 00:09:33.706 6906.486 - 6956.898: 95.5874% ( 250) 00:09:33.706 6956.898 - 7007.311: 96.2722% ( 142) 00:09:33.706 7007.311 - 7057.723: 96.6387% ( 76) 00:09:33.706 7057.723 - 7108.135: 96.9232% ( 59) 00:09:33.706 7108.135 - 7158.548: 97.1306% ( 43) 00:09:33.706 7158.548 - 7208.960: 97.2946% ( 34) 00:09:33.706 7208.960 - 7259.372: 97.4392% 
( 30) 00:09:33.706 7259.372 - 7309.785: 97.5453% ( 22) 00:09:33.706 7309.785 - 7360.197: 97.6128% ( 14) 00:09:33.706 7360.197 - 7410.609: 97.6611% ( 10) 00:09:33.706 7410.609 - 7461.022: 97.7189% ( 12) 00:09:33.706 7461.022 - 7511.434: 97.7720% ( 11) 00:09:33.706 7511.434 - 7561.846: 97.8057% ( 7) 00:09:33.706 7561.846 - 7612.258: 97.8443% ( 8) 00:09:33.706 7612.258 - 7662.671: 97.8781% ( 7) 00:09:33.706 7662.671 - 7713.083: 97.8926% ( 3) 00:09:33.706 7713.083 - 7763.495: 97.9070% ( 3) 00:09:33.706 7763.495 - 7813.908: 97.9263% ( 4) 00:09:33.706 7813.908 - 7864.320: 97.9552% ( 6) 00:09:33.706 7864.320 - 7914.732: 97.9938% ( 8) 00:09:33.706 7914.732 - 7965.145: 98.0324% ( 8) 00:09:33.706 7965.145 - 8015.557: 98.0613% ( 6) 00:09:33.706 8015.557 - 8065.969: 98.0806% ( 4) 00:09:33.706 8065.969 - 8116.382: 98.1144% ( 7) 00:09:33.706 8116.382 - 8166.794: 98.1626% ( 10) 00:09:33.706 8166.794 - 8217.206: 98.1916% ( 6) 00:09:33.706 8217.206 - 8267.618: 98.2157% ( 5) 00:09:33.706 8267.618 - 8318.031: 98.2494% ( 7) 00:09:33.706 8318.031 - 8368.443: 98.2784% ( 6) 00:09:33.706 8368.443 - 8418.855: 98.3121% ( 7) 00:09:33.706 8418.855 - 8469.268: 98.3362% ( 5) 00:09:33.706 8469.268 - 8519.680: 98.3507% ( 3) 00:09:33.706 8519.680 - 8570.092: 98.3652% ( 3) 00:09:33.706 8570.092 - 8620.505: 98.3796% ( 3) 00:09:33.706 8620.505 - 8670.917: 98.3941% ( 3) 00:09:33.706 8670.917 - 8721.329: 98.4086% ( 3) 00:09:33.706 8721.329 - 8771.742: 98.4230% ( 3) 00:09:33.706 8771.742 - 8822.154: 98.4375% ( 3) 00:09:33.706 8822.154 - 8872.566: 98.4520% ( 3) 00:09:33.706 8872.566 - 8922.978: 98.4664% ( 3) 00:09:33.706 8922.978 - 8973.391: 98.4809% ( 3) 00:09:33.706 8973.391 - 9023.803: 98.4954% ( 3) 00:09:33.706 9023.803 - 9074.215: 98.5098% ( 3) 00:09:33.706 9074.215 - 9124.628: 98.5243% ( 3) 00:09:33.706 9124.628 - 9175.040: 98.5388% ( 3) 00:09:33.706 9175.040 - 9225.452: 98.5484% ( 2) 00:09:33.706 9225.452 - 9275.865: 98.5629% ( 3) 00:09:33.706 9275.865 - 9326.277: 98.5774% ( 3) 00:09:33.706 9326.277 - 9376.689: 98.5918% ( 3) 00:09:33.706 9376.689 - 9427.102: 98.6063% ( 3) 00:09:33.706 9427.102 - 9477.514: 98.6208% ( 3) 00:09:33.706 9477.514 - 9527.926: 98.6352% ( 3) 00:09:33.706 9527.926 - 9578.338: 98.6497% ( 3) 00:09:33.706 9578.338 - 9628.751: 98.6642% ( 3) 00:09:33.706 9628.751 - 9679.163: 98.6786% ( 3) 00:09:33.706 9679.163 - 9729.575: 98.6931% ( 3) 00:09:33.706 9729.575 - 9779.988: 98.7076% ( 3) 00:09:33.706 9779.988 - 9830.400: 98.7220% ( 3) 00:09:33.706 9830.400 - 9880.812: 98.7365% ( 3) 00:09:33.706 9880.812 - 9931.225: 98.7510% ( 3) 00:09:33.706 9931.225 - 9981.637: 98.7654% ( 3) 00:09:33.706 10636.997 - 10687.409: 98.7799% ( 3) 00:09:33.706 10687.409 - 10737.822: 98.7944% ( 3) 00:09:33.706 10737.822 - 10788.234: 98.8040% ( 2) 00:09:33.706 10788.234 - 10838.646: 98.8137% ( 2) 00:09:33.706 10838.646 - 10889.058: 98.8281% ( 3) 00:09:33.706 10889.058 - 10939.471: 98.8378% ( 2) 00:09:33.706 10939.471 - 10989.883: 98.8474% ( 2) 00:09:33.706 10989.883 - 11040.295: 98.8619% ( 3) 00:09:33.706 11040.295 - 11090.708: 98.8715% ( 2) 00:09:33.706 11090.708 - 11141.120: 98.8812% ( 2) 00:09:33.706 11141.120 - 11191.532: 98.8908% ( 2) 00:09:33.706 11191.532 - 11241.945: 98.9053% ( 3) 00:09:33.706 11241.945 - 11292.357: 98.9149% ( 2) 00:09:33.706 11292.357 - 11342.769: 98.9246% ( 2) 00:09:33.706 11342.769 - 11393.182: 98.9390% ( 3) 00:09:33.706 11393.182 - 11443.594: 98.9487% ( 2) 00:09:33.706 11443.594 - 11494.006: 98.9632% ( 3) 00:09:33.706 11494.006 - 11544.418: 98.9728% ( 2) 00:09:33.706 11544.418 - 11594.831: 98.9873% ( 3) 
00:09:33.706 11594.831 - 11645.243: 98.9969% ( 2) 00:09:33.706 11645.243 - 11695.655: 99.0066% ( 2) 00:09:33.706 11695.655 - 11746.068: 99.0210% ( 3) 00:09:33.706 11746.068 - 11796.480: 99.0307% ( 2) 00:09:33.706 11796.480 - 11846.892: 99.0451% ( 3) 00:09:33.706 11846.892 - 11897.305: 99.0548% ( 2) 00:09:33.706 11897.305 - 11947.717: 99.0644% ( 2) 00:09:33.706 11947.717 - 11998.129: 99.0789% ( 3) 00:09:33.706 11998.129 - 12048.542: 99.0885% ( 2) 00:09:33.706 12048.542 - 12098.954: 99.1030% ( 3) 00:09:33.706 12098.954 - 12149.366: 99.1127% ( 2) 00:09:33.706 12149.366 - 12199.778: 99.1271% ( 3) 00:09:33.706 12199.778 - 12250.191: 99.1368% ( 2) 00:09:33.706 12250.191 - 12300.603: 99.1464% ( 2) 00:09:33.706 12300.603 - 12351.015: 99.1561% ( 2) 00:09:33.706 12351.015 - 12401.428: 99.1657% ( 2) 00:09:33.706 12401.428 - 12451.840: 99.1802% ( 3) 00:09:33.706 12451.840 - 12502.252: 99.1898% ( 2) 00:09:33.706 12502.252 - 12552.665: 99.2043% ( 3) 00:09:33.706 12552.665 - 12603.077: 99.2139% ( 2) 00:09:33.706 12603.077 - 12653.489: 99.2284% ( 3) 00:09:33.706 12653.489 - 12703.902: 99.2380% ( 2) 00:09:33.706 12703.902 - 12754.314: 99.2525% ( 3) 00:09:33.706 12754.314 - 12804.726: 99.2573% ( 1) 00:09:33.706 12804.726 - 12855.138: 99.2718% ( 3) 00:09:33.706 12855.138 - 12905.551: 99.2814% ( 2) 00:09:33.706 12905.551 - 13006.375: 99.3056% ( 5) 00:09:33.706 13006.375 - 13107.200: 99.3297% ( 5) 00:09:33.706 13107.200 - 13208.025: 99.3538% ( 5) 00:09:33.706 13208.025 - 13308.849: 99.3779% ( 5) 00:09:33.706 13308.849 - 13409.674: 99.3827% ( 1) 00:09:33.706 15123.692 - 15224.517: 99.3924% ( 2) 00:09:33.706 15224.517 - 15325.342: 99.4117% ( 4) 00:09:33.706 15325.342 - 15426.166: 99.4309% ( 4) 00:09:33.706 15426.166 - 15526.991: 99.4551% ( 5) 00:09:33.706 15526.991 - 15627.815: 99.4743% ( 4) 00:09:33.706 15627.815 - 15728.640: 99.4936% ( 4) 00:09:33.706 15728.640 - 15829.465: 99.5129% ( 4) 00:09:33.706 15829.465 - 15930.289: 99.5322% ( 4) 00:09:33.706 15930.289 - 16031.114: 99.5515% ( 4) 00:09:33.706 16031.114 - 16131.938: 99.5708% ( 4) 00:09:33.706 16131.938 - 16232.763: 99.5901% ( 4) 00:09:33.706 16232.763 - 16333.588: 99.6142% ( 5) 00:09:33.706 16333.588 - 16434.412: 99.6335% ( 4) 00:09:33.706 16434.412 - 16535.237: 99.6528% ( 4) 00:09:33.706 16535.237 - 16636.062: 99.6721% ( 4) 00:09:33.706 16636.062 - 16736.886: 99.6914% ( 4) 00:09:33.706 16736.886 - 16837.711: 99.7155% ( 5) 00:09:33.706 16837.711 - 16938.535: 99.7348% ( 4) 00:09:33.706 16938.535 - 17039.360: 99.7541% ( 4) 00:09:33.706 17039.360 - 17140.185: 99.7733% ( 4) 00:09:33.706 17140.185 - 17241.009: 99.7926% ( 4) 00:09:33.706 17241.009 - 17341.834: 99.8119% ( 4) 00:09:33.706 17341.834 - 17442.658: 99.8360% ( 5) 00:09:33.706 17442.658 - 17543.483: 99.8553% ( 4) 00:09:33.706 17543.483 - 17644.308: 99.8746% ( 4) 00:09:33.706 17644.308 - 17745.132: 99.8939% ( 4) 00:09:33.706 17745.132 - 17845.957: 99.9180% ( 5) 00:09:33.706 17845.957 - 17946.782: 99.9373% ( 4) 00:09:33.706 17946.782 - 18047.606: 99.9566% ( 4) 00:09:33.706 18047.606 - 18148.431: 99.9759% ( 4) 00:09:33.706 18148.431 - 18249.255: 100.0000% ( 5) 00:09:33.706 00:09:33.706 09:47:22 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:35.079 Initializing NVMe Controllers 00:09:35.079 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:35.079 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:35.079 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:35.079 Attached to NVMe Controller at 
0000:00:08.0 [1b36:0010] 00:09:35.079 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:35.079 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:35.079 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:35.079 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:35.079 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:35.079 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:35.079 Initialization complete. Launching workers. 00:09:35.079 ======================================================== 00:09:35.079 Latency(us) 00:09:35.079 Device Information : IOPS MiB/s Average min max 00:09:35.079 PCIE (0000:00:09.0) NSID 1 from core 0: 19670.77 230.52 6504.43 5060.37 27777.48 00:09:35.079 PCIE (0000:00:06.0) NSID 1 from core 0: 19670.77 230.52 6498.54 4861.89 27465.12 00:09:35.079 PCIE (0000:00:07.0) NSID 1 from core 0: 19670.77 230.52 6492.43 5118.14 26495.41 00:09:35.079 PCIE (0000:00:08.0) NSID 1 from core 0: 19670.77 230.52 6486.69 5054.90 25814.28 00:09:35.079 PCIE (0000:00:08.0) NSID 2 from core 0: 19670.77 230.52 6480.94 5089.73 24805.46 00:09:35.079 PCIE (0000:00:08.0) NSID 3 from core 0: 19670.77 230.52 6475.05 5181.99 23469.99 00:09:35.079 ======================================================== 00:09:35.079 Total : 118024.62 1383.10 6489.68 4861.89 27777.48 00:09:35.079 00:09:35.079 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:35.079 ================================================================================= 00:09:35.079 1.00000% : 5520.148us 00:09:35.079 10.00000% : 5797.415us 00:09:35.079 25.00000% : 6024.271us 00:09:35.079 50.00000% : 6276.332us 00:09:35.079 75.00000% : 6654.425us 00:09:35.079 90.00000% : 7057.723us 00:09:35.079 95.00000% : 7410.609us 00:09:35.079 98.00000% : 8771.742us 00:09:35.079 99.00000% : 9779.988us 00:09:35.079 99.50000% : 24601.206us 00:09:35.079 99.90000% : 27222.646us 00:09:35.079 99.99000% : 27827.594us 00:09:35.079 99.99900% : 27827.594us 00:09:35.079 99.99990% : 27827.594us 00:09:35.079 99.99999% : 27827.594us 00:09:35.079 00:09:35.079 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:35.079 ================================================================================= 00:09:35.079 1.00000% : 5318.498us 00:09:35.079 10.00000% : 5595.766us 00:09:35.079 25.00000% : 5822.622us 00:09:35.079 50.00000% : 6301.538us 00:09:35.079 75.00000% : 6856.074us 00:09:35.079 90.00000% : 7259.372us 00:09:35.079 95.00000% : 7612.258us 00:09:35.079 98.00000% : 8469.268us 00:09:35.079 99.00000% : 9981.637us 00:09:35.079 99.50000% : 24500.382us 00:09:35.079 99.90000% : 27020.997us 00:09:35.079 99.99000% : 27424.295us 00:09:35.079 99.99900% : 27625.945us 00:09:35.079 99.99990% : 27625.945us 00:09:35.079 99.99999% : 27625.945us 00:09:35.079 00:09:35.079 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:35.079 ================================================================================= 00:09:35.079 1.00000% : 5545.354us 00:09:35.079 10.00000% : 5847.828us 00:09:35.079 25.00000% : 6024.271us 00:09:35.079 50.00000% : 6276.332us 00:09:35.079 75.00000% : 6604.012us 00:09:35.079 90.00000% : 7057.723us 00:09:35.079 95.00000% : 7561.846us 00:09:35.079 98.00000% : 8469.268us 00:09:35.079 99.00000% : 9729.575us 00:09:35.079 99.50000% : 23794.609us 00:09:35.079 99.90000% : 26012.751us 00:09:35.079 99.99000% : 26617.698us 00:09:35.079 99.99900% : 26617.698us 00:09:35.079 99.99990% : 26617.698us 00:09:35.079 99.99999% : 26617.698us 00:09:35.079 00:09:35.079 Summary 
latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:35.079 ================================================================================= 00:09:35.079 1.00000% : 5520.148us 00:09:35.079 10.00000% : 5822.622us 00:09:35.079 25.00000% : 6024.271us 00:09:35.079 50.00000% : 6276.332us 00:09:35.079 75.00000% : 6654.425us 00:09:35.079 90.00000% : 7057.723us 00:09:35.079 95.00000% : 7461.022us 00:09:35.079 98.00000% : 8469.268us 00:09:35.079 99.00000% : 9628.751us 00:09:35.079 99.50000% : 23492.135us 00:09:35.079 99.90000% : 25206.154us 00:09:35.079 99.99000% : 25811.102us 00:09:35.079 99.99900% : 26012.751us 00:09:35.079 99.99990% : 26012.751us 00:09:35.079 99.99999% : 26012.751us 00:09:35.079 00:09:35.079 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:35.079 ================================================================================= 00:09:35.079 1.00000% : 5545.354us 00:09:35.079 10.00000% : 5822.622us 00:09:35.079 25.00000% : 6024.271us 00:09:35.079 50.00000% : 6276.332us 00:09:35.079 75.00000% : 6604.012us 00:09:35.079 90.00000% : 7007.311us 00:09:35.079 95.00000% : 7612.258us 00:09:35.079 98.00000% : 8267.618us 00:09:35.079 99.00000% : 9779.988us 00:09:35.079 99.50000% : 22584.714us 00:09:35.079 99.90000% : 24298.732us 00:09:35.079 99.99000% : 24802.855us 00:09:35.079 99.99900% : 24903.680us 00:09:35.079 99.99990% : 24903.680us 00:09:35.079 99.99999% : 24903.680us 00:09:35.079 00:09:35.079 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:35.079 ================================================================================= 00:09:35.079 1.00000% : 5520.148us 00:09:35.079 10.00000% : 5822.622us 00:09:35.079 25.00000% : 6024.271us 00:09:35.079 50.00000% : 6276.332us 00:09:35.079 75.00000% : 6654.425us 00:09:35.079 90.00000% : 7057.723us 00:09:35.079 95.00000% : 7561.846us 00:09:35.079 98.00000% : 8267.618us 00:09:35.079 99.00000% : 9779.988us 00:09:35.079 99.50000% : 22181.415us 00:09:35.079 99.90000% : 22988.012us 00:09:35.079 99.99000% : 23492.135us 00:09:35.079 99.99900% : 23492.135us 00:09:35.080 99.99990% : 23492.135us 00:09:35.080 99.99999% : 23492.135us 00:09:35.080 00:09:35.080 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:35.080 ============================================================================== 00:09:35.080 Range in us Cumulative IO count 00:09:35.080 5041.231 - 5066.437: 0.0051% ( 1) 00:09:35.080 5116.849 - 5142.055: 0.0101% ( 1) 00:09:35.080 5142.055 - 5167.262: 0.0152% ( 1) 00:09:35.080 5167.262 - 5192.468: 0.0355% ( 4) 00:09:35.080 5192.468 - 5217.674: 0.0406% ( 1) 00:09:35.080 5217.674 - 5242.880: 0.0558% ( 3) 00:09:35.080 5242.880 - 5268.086: 0.0659% ( 2) 00:09:35.080 5268.086 - 5293.292: 0.1116% ( 9) 00:09:35.080 5293.292 - 5318.498: 0.1471% ( 7) 00:09:35.080 5318.498 - 5343.705: 0.2131% ( 13) 00:09:35.080 5343.705 - 5368.911: 0.2486% ( 7) 00:09:35.080 5368.911 - 5394.117: 0.2993% ( 10) 00:09:35.080 5394.117 - 5419.323: 0.3551% ( 11) 00:09:35.080 5419.323 - 5444.529: 0.5225% ( 33) 00:09:35.080 5444.529 - 5469.735: 0.7153% ( 38) 00:09:35.080 5469.735 - 5494.942: 0.9436% ( 45) 00:09:35.080 5494.942 - 5520.148: 1.2530% ( 61) 00:09:35.080 5520.148 - 5545.354: 1.5929% ( 67) 00:09:35.080 5545.354 - 5570.560: 2.0089% ( 82) 00:09:35.080 5570.560 - 5595.766: 2.4401% ( 85) 00:09:35.080 5595.766 - 5620.972: 2.9830% ( 107) 00:09:35.080 5620.972 - 5646.178: 3.6678% ( 135) 00:09:35.080 5646.178 - 5671.385: 4.4186% ( 148) 00:09:35.080 5671.385 - 5696.591: 5.2861% ( 171) 00:09:35.080 5696.591 - 
5721.797: 6.2703% ( 194) 00:09:35.080 5721.797 - 5747.003: 7.4777% ( 238) 00:09:35.080 5747.003 - 5772.209: 8.6901% ( 239) 00:09:35.080 5772.209 - 5797.415: 10.0802% ( 274) 00:09:35.080 5797.415 - 5822.622: 11.5970% ( 299) 00:09:35.080 5822.622 - 5847.828: 13.2660% ( 329) 00:09:35.080 5847.828 - 5873.034: 15.0315% ( 348) 00:09:35.080 5873.034 - 5898.240: 17.0302% ( 394) 00:09:35.080 5898.240 - 5923.446: 19.1254% ( 413) 00:09:35.080 5923.446 - 5948.652: 21.0836% ( 386) 00:09:35.080 5948.652 - 5973.858: 23.0114% ( 380) 00:09:35.080 5973.858 - 5999.065: 24.9696% ( 386) 00:09:35.080 5999.065 - 6024.271: 27.1662% ( 433) 00:09:35.080 6024.271 - 6049.477: 29.4592% ( 452) 00:09:35.080 6049.477 - 6074.683: 31.8689% ( 475) 00:09:35.080 6074.683 - 6099.889: 33.8119% ( 383) 00:09:35.080 6099.889 - 6125.095: 35.9781% ( 427) 00:09:35.080 6125.095 - 6150.302: 38.1950% ( 437) 00:09:35.080 6150.302 - 6175.508: 40.7823% ( 510) 00:09:35.080 6175.508 - 6200.714: 43.6638% ( 568) 00:09:35.080 6200.714 - 6225.920: 46.1496% ( 490) 00:09:35.080 6225.920 - 6251.126: 48.9448% ( 551) 00:09:35.080 6251.126 - 6276.332: 51.1262% ( 430) 00:09:35.080 6276.332 - 6301.538: 53.6983% ( 507) 00:09:35.080 6301.538 - 6326.745: 56.4732% ( 547) 00:09:35.080 6326.745 - 6351.951: 58.5177% ( 403) 00:09:35.080 6351.951 - 6377.157: 60.1968% ( 331) 00:09:35.080 6377.157 - 6402.363: 61.7340% ( 303) 00:09:35.080 6402.363 - 6427.569: 63.5095% ( 350) 00:09:35.080 6427.569 - 6452.775: 65.0974% ( 313) 00:09:35.080 6452.775 - 6503.188: 68.0753% ( 587) 00:09:35.080 6503.188 - 6553.600: 70.7995% ( 537) 00:09:35.080 6553.600 - 6604.012: 73.5745% ( 547) 00:09:35.080 6604.012 - 6654.425: 76.0806% ( 494) 00:09:35.080 6654.425 - 6704.837: 78.2468% ( 427) 00:09:35.080 6704.837 - 6755.249: 80.3267% ( 410) 00:09:35.080 6755.249 - 6805.662: 82.3407% ( 397) 00:09:35.080 6805.662 - 6856.074: 84.2482% ( 376) 00:09:35.080 6856.074 - 6906.486: 86.1861% ( 382) 00:09:35.080 6906.486 - 6956.898: 87.7993% ( 318) 00:09:35.080 6956.898 - 7007.311: 89.0828% ( 253) 00:09:35.080 7007.311 - 7057.723: 90.2953% ( 239) 00:09:35.080 7057.723 - 7108.135: 91.1881% ( 176) 00:09:35.080 7108.135 - 7158.548: 91.9338% ( 147) 00:09:35.080 7158.548 - 7208.960: 92.6897% ( 149) 00:09:35.080 7208.960 - 7259.372: 93.3644% ( 133) 00:09:35.080 7259.372 - 7309.785: 94.0747% ( 140) 00:09:35.080 7309.785 - 7360.197: 94.5211% ( 88) 00:09:35.080 7360.197 - 7410.609: 95.0436% ( 103) 00:09:35.080 7410.609 - 7461.022: 95.3987% ( 70) 00:09:35.080 7461.022 - 7511.434: 95.6625% ( 52) 00:09:35.080 7511.434 - 7561.846: 95.9669% ( 60) 00:09:35.080 7561.846 - 7612.258: 96.1648% ( 39) 00:09:35.080 7612.258 - 7662.671: 96.3728% ( 41) 00:09:35.080 7662.671 - 7713.083: 96.5503% ( 35) 00:09:35.080 7713.083 - 7763.495: 96.7076% ( 31) 00:09:35.080 7763.495 - 7813.908: 96.8395% ( 26) 00:09:35.080 7813.908 - 7864.320: 96.9663% ( 25) 00:09:35.080 7864.320 - 7914.732: 97.0728% ( 21) 00:09:35.080 7914.732 - 7965.145: 97.1895% ( 23) 00:09:35.080 7965.145 - 8015.557: 97.2656% ( 15) 00:09:35.080 8015.557 - 8065.969: 97.3569% ( 18) 00:09:35.080 8065.969 - 8116.382: 97.4584% ( 20) 00:09:35.080 8116.382 - 8166.794: 97.5244% ( 13) 00:09:35.080 8166.794 - 8217.206: 97.6055% ( 16) 00:09:35.080 8217.206 - 8267.618: 97.6715% ( 13) 00:09:35.080 8267.618 - 8318.031: 97.7273% ( 11) 00:09:35.080 8318.031 - 8368.443: 97.7628% ( 7) 00:09:35.080 8368.443 - 8418.855: 97.8084% ( 9) 00:09:35.080 8418.855 - 8469.268: 97.8490% ( 8) 00:09:35.080 8469.268 - 8519.680: 97.8896% ( 8) 00:09:35.080 8519.680 - 8570.092: 97.9200% ( 6) 
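For context on the run whose output begins above: the spdk_nvme_perf invocation (-q 128 -w write -o 12288 -t 1 -LL -i 0) drives a 100%-write workload at queue depth 128 with 12288-byte (12 KiB) IOs for one second; -i 0 selects the shared-memory group ID, and passing -L twice enables latency tracking plus the detailed per-bucket histograms in addition to the percentile summaries. The MiB/s column in the device-information table follows directly from IOPS and the IO size; a quick consistency check, with values copied from that table (a sketch, not part of the test scripts):

    # IOPS x IO size -> throughput, for one namespace from the table above.
    iops = 19670.77          # per-namespace IOPS
    io_size = 12288          # bytes, from -o 12288

    mib_s = iops * io_size / 2**20   # bytes/s -> MiB/s
    print(f"{mib_s:.2f} MiB/s")      # 230.52, matching the MiB/s column

    # The totals row scales the same way: six namespaces at this rate give
    # the reported 118024.62 IOPS and 1383.10 MiB/s.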
00:09:35.080 8570.092 - 8620.505: 97.9454% ( 5) 00:09:35.080 8620.505 - 8670.917: 97.9708% ( 5) 00:09:35.080 8670.917 - 8721.329: 97.9911% ( 4) 00:09:35.080 8721.329 - 8771.742: 98.0063% ( 3) 00:09:35.080 8771.742 - 8822.154: 98.0215% ( 3) 00:09:35.080 8822.154 - 8872.566: 98.0367% ( 3) 00:09:35.080 8872.566 - 8922.978: 98.0570% ( 4) 00:09:35.080 8973.391 - 9023.803: 98.0722% ( 3) 00:09:35.080 9023.803 - 9074.215: 98.1027% ( 6) 00:09:35.080 9074.215 - 9124.628: 98.1433% ( 8) 00:09:35.080 9124.628 - 9175.040: 98.2041% ( 12) 00:09:35.080 9175.040 - 9225.452: 98.2752% ( 14) 00:09:35.080 9225.452 - 9275.865: 98.3614% ( 17) 00:09:35.080 9275.865 - 9326.277: 98.4324% ( 14) 00:09:35.080 9326.277 - 9376.689: 98.5034% ( 14) 00:09:35.080 9376.689 - 9427.102: 98.5795% ( 15) 00:09:35.080 9427.102 - 9477.514: 98.6201% ( 8) 00:09:35.080 9477.514 - 9527.926: 98.6556% ( 7) 00:09:35.080 9527.926 - 9578.338: 98.6962% ( 8) 00:09:35.080 9578.338 - 9628.751: 98.7317% ( 7) 00:09:35.080 9628.751 - 9679.163: 98.7723% ( 8) 00:09:35.080 9679.163 - 9729.575: 98.8129% ( 8) 00:09:35.080 9729.575 - 9779.988: 99.0057% ( 38) 00:09:35.080 9779.988 - 9830.400: 99.0463% ( 8) 00:09:35.080 9830.400 - 9880.812: 99.0716% ( 5) 00:09:35.080 9880.812 - 9931.225: 99.0970% ( 5) 00:09:35.080 9931.225 - 9981.637: 99.1173% ( 4) 00:09:35.080 9981.637 - 10032.049: 99.1427% ( 5) 00:09:35.080 10032.049 - 10082.462: 99.1629% ( 4) 00:09:35.080 10082.462 - 10132.874: 99.1883% ( 5) 00:09:35.080 10132.874 - 10183.286: 99.2137% ( 5) 00:09:35.080 10183.286 - 10233.698: 99.2340% ( 4) 00:09:35.080 10233.698 - 10284.111: 99.2644% ( 6) 00:09:35.080 10284.111 - 10334.523: 99.2847% ( 4) 00:09:35.080 10334.523 - 10384.935: 99.3050% ( 4) 00:09:35.080 10384.935 - 10435.348: 99.3304% ( 5) 00:09:35.080 10435.348 - 10485.760: 99.3456% ( 3) 00:09:35.080 10485.760 - 10536.172: 99.3506% ( 1) 00:09:35.080 23492.135 - 23592.960: 99.3557% ( 1) 00:09:35.080 23592.960 - 23693.785: 99.3709% ( 3) 00:09:35.080 23693.785 - 23794.609: 99.3862% ( 3) 00:09:35.080 23794.609 - 23895.434: 99.4014% ( 3) 00:09:35.080 23895.434 - 23996.258: 99.4166% ( 3) 00:09:35.080 23996.258 - 24097.083: 99.4267% ( 2) 00:09:35.080 24097.083 - 24197.908: 99.4420% ( 3) 00:09:35.080 24197.908 - 24298.732: 99.4572% ( 3) 00:09:35.080 24298.732 - 24399.557: 99.4724% ( 3) 00:09:35.080 24399.557 - 24500.382: 99.4876% ( 3) 00:09:35.080 24500.382 - 24601.206: 99.5028% ( 3) 00:09:35.080 24601.206 - 24702.031: 99.5181% ( 3) 00:09:35.080 24702.031 - 24802.855: 99.5333% ( 3) 00:09:35.080 24802.855 - 24903.680: 99.5434% ( 2) 00:09:35.080 24903.680 - 25004.505: 99.5586% ( 3) 00:09:35.080 25004.505 - 25105.329: 99.5739% ( 3) 00:09:35.080 25105.329 - 25206.154: 99.5891% ( 3) 00:09:35.080 25206.154 - 25306.978: 99.6043% ( 3) 00:09:35.080 25306.978 - 25407.803: 99.6195% ( 3) 00:09:35.080 25407.803 - 25508.628: 99.6347% ( 3) 00:09:35.080 25508.628 - 25609.452: 99.6500% ( 3) 00:09:35.080 25609.452 - 25710.277: 99.6652% ( 3) 00:09:35.080 25710.277 - 25811.102: 99.6855% ( 4) 00:09:35.080 25811.102 - 26012.751: 99.7108% ( 5) 00:09:35.080 26012.751 - 26214.400: 99.7463% ( 7) 00:09:35.080 26214.400 - 26416.049: 99.7768% ( 6) 00:09:35.080 26416.049 - 26617.698: 99.8123% ( 7) 00:09:35.080 26617.698 - 26819.348: 99.8478% ( 7) 00:09:35.080 26819.348 - 27020.997: 99.8782% ( 6) 00:09:35.080 27020.997 - 27222.646: 99.9138% ( 7) 00:09:35.080 27222.646 - 27424.295: 99.9493% ( 7) 00:09:35.080 27424.295 - 27625.945: 99.9797% ( 6) 00:09:35.080 27625.945 - 27827.594: 100.0000% ( 4) 00:09:35.080 00:09:35.080 Latency histogram for 
PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:35.080 ============================================================================== 00:09:35.080 Range in us Cumulative IO count 00:09:35.080 4839.582 - 4864.788: 0.0051% ( 1) 00:09:35.080 4915.200 - 4940.406: 0.0101% ( 1) 00:09:35.080 4940.406 - 4965.612: 0.0203% ( 2) 00:09:35.080 4965.612 - 4990.818: 0.0254% ( 1) 00:09:35.080 4990.818 - 5016.025: 0.0355% ( 2) 00:09:35.080 5016.025 - 5041.231: 0.0406% ( 1) 00:09:35.080 5041.231 - 5066.437: 0.0507% ( 2) 00:09:35.080 5066.437 - 5091.643: 0.1065% ( 11) 00:09:35.080 5091.643 - 5116.849: 0.1725% ( 13) 00:09:35.080 5116.849 - 5142.055: 0.2334% ( 12) 00:09:35.080 5142.055 - 5167.262: 0.2739% ( 8) 00:09:35.080 5167.262 - 5192.468: 0.3348% ( 12) 00:09:35.080 5192.468 - 5217.674: 0.4515% ( 23) 00:09:35.080 5217.674 - 5242.880: 0.5986% ( 29) 00:09:35.081 5242.880 - 5268.086: 0.8218% ( 44) 00:09:35.081 5268.086 - 5293.292: 0.9892% ( 33) 00:09:35.081 5293.292 - 5318.498: 1.1769% ( 37) 00:09:35.081 5318.498 - 5343.705: 1.5777% ( 79) 00:09:35.081 5343.705 - 5368.911: 2.0191% ( 87) 00:09:35.081 5368.911 - 5394.117: 2.5670% ( 108) 00:09:35.081 5394.117 - 5419.323: 3.2620% ( 137) 00:09:35.081 5419.323 - 5444.529: 3.8707% ( 120) 00:09:35.081 5444.529 - 5469.735: 4.6418% ( 152) 00:09:35.081 5469.735 - 5494.942: 5.5499% ( 179) 00:09:35.081 5494.942 - 5520.148: 6.5747% ( 202) 00:09:35.081 5520.148 - 5545.354: 8.0357% ( 288) 00:09:35.081 5545.354 - 5570.560: 9.3801% ( 265) 00:09:35.081 5570.560 - 5595.766: 11.0593% ( 331) 00:09:35.081 5595.766 - 5620.972: 12.8247% ( 348) 00:09:35.081 5620.972 - 5646.178: 14.4836% ( 327) 00:09:35.081 5646.178 - 5671.385: 16.2744% ( 353) 00:09:35.081 5671.385 - 5696.591: 18.0043% ( 341) 00:09:35.081 5696.591 - 5721.797: 19.6378% ( 322) 00:09:35.081 5721.797 - 5747.003: 21.2307% ( 314) 00:09:35.081 5747.003 - 5772.209: 22.7222% ( 294) 00:09:35.081 5772.209 - 5797.415: 24.0716% ( 266) 00:09:35.081 5797.415 - 5822.622: 25.4058% ( 263) 00:09:35.081 5822.622 - 5847.828: 26.8770% ( 290) 00:09:35.081 5847.828 - 5873.034: 28.0692% ( 235) 00:09:35.081 5873.034 - 5898.240: 29.2056% ( 224) 00:09:35.081 5898.240 - 5923.446: 30.4383% ( 243) 00:09:35.081 5923.446 - 5948.652: 31.6406% ( 237) 00:09:35.081 5948.652 - 5973.858: 32.8531% ( 239) 00:09:35.081 5973.858 - 5999.065: 34.0351% ( 233) 00:09:35.081 5999.065 - 6024.271: 35.3642% ( 262) 00:09:35.081 6024.271 - 6049.477: 36.8405% ( 291) 00:09:35.081 6049.477 - 6074.683: 38.2001% ( 268) 00:09:35.081 6074.683 - 6099.889: 39.6713% ( 290) 00:09:35.081 6099.889 - 6125.095: 41.2338% ( 308) 00:09:35.081 6125.095 - 6150.302: 42.6187% ( 273) 00:09:35.081 6150.302 - 6175.508: 43.9377% ( 260) 00:09:35.081 6175.508 - 6200.714: 45.2212% ( 253) 00:09:35.081 6200.714 - 6225.920: 46.5858% ( 269) 00:09:35.081 6225.920 - 6251.126: 47.8642% ( 252) 00:09:35.081 6251.126 - 6276.332: 49.2999% ( 283) 00:09:35.081 6276.332 - 6301.538: 50.4972% ( 236) 00:09:35.081 6301.538 - 6326.745: 51.7908% ( 255) 00:09:35.081 6326.745 - 6351.951: 52.9424% ( 227) 00:09:35.081 6351.951 - 6377.157: 54.1700% ( 242) 00:09:35.081 6377.157 - 6402.363: 55.2963% ( 222) 00:09:35.081 6402.363 - 6427.569: 56.5848% ( 254) 00:09:35.081 6427.569 - 6452.775: 57.8835% ( 256) 00:09:35.081 6452.775 - 6503.188: 60.7904% ( 573) 00:09:35.081 6503.188 - 6553.600: 63.5958% ( 553) 00:09:35.081 6553.600 - 6604.012: 66.0917% ( 492) 00:09:35.081 6604.012 - 6654.425: 68.0651% ( 389) 00:09:35.081 6654.425 - 6704.837: 70.5408% ( 488) 00:09:35.081 6704.837 - 6755.249: 72.7273% ( 431) 00:09:35.081 6755.249 - 
6805.662: 74.7717% ( 403) 00:09:35.081 6805.662 - 6856.074: 76.9125% ( 422) 00:09:35.081 6856.074 - 6906.486: 78.9164% ( 395) 00:09:35.081 6906.486 - 6956.898: 80.6311% ( 338) 00:09:35.081 6956.898 - 7007.311: 82.3052% ( 330) 00:09:35.081 7007.311 - 7057.723: 84.0402% ( 342) 00:09:35.081 7057.723 - 7108.135: 85.8310% ( 353) 00:09:35.081 7108.135 - 7158.548: 87.5710% ( 343) 00:09:35.081 7158.548 - 7208.960: 89.2198% ( 325) 00:09:35.081 7208.960 - 7259.372: 90.6149% ( 275) 00:09:35.081 7259.372 - 7309.785: 91.8425% ( 242) 00:09:35.081 7309.785 - 7360.197: 92.7455% ( 178) 00:09:35.081 7360.197 - 7410.609: 93.4761% ( 144) 00:09:35.081 7410.609 - 7461.022: 93.9681% ( 97) 00:09:35.081 7461.022 - 7511.434: 94.3943% ( 84) 00:09:35.081 7511.434 - 7561.846: 94.7545% ( 71) 00:09:35.081 7561.846 - 7612.258: 95.2060% ( 89) 00:09:35.081 7612.258 - 7662.671: 95.5154% ( 61) 00:09:35.081 7662.671 - 7713.083: 95.7995% ( 56) 00:09:35.081 7713.083 - 7763.495: 96.0684% ( 53) 00:09:35.081 7763.495 - 7813.908: 96.3880% ( 63) 00:09:35.081 7813.908 - 7864.320: 96.5503% ( 32) 00:09:35.081 7864.320 - 7914.732: 96.7025% ( 30) 00:09:35.081 7914.732 - 7965.145: 96.8598% ( 31) 00:09:35.081 7965.145 - 8015.557: 96.9968% ( 27) 00:09:35.081 8015.557 - 8065.969: 97.1337% ( 27) 00:09:35.081 8065.969 - 8116.382: 97.2808% ( 29) 00:09:35.081 8116.382 - 8166.794: 97.3823% ( 20) 00:09:35.081 8166.794 - 8217.206: 97.4939% ( 22) 00:09:35.081 8217.206 - 8267.618: 97.6106% ( 23) 00:09:35.081 8267.618 - 8318.031: 97.7222% ( 22) 00:09:35.081 8318.031 - 8368.443: 97.8237% ( 20) 00:09:35.081 8368.443 - 8418.855: 97.9759% ( 30) 00:09:35.081 8418.855 - 8469.268: 98.0925% ( 23) 00:09:35.081 8469.268 - 8519.680: 98.1585% ( 13) 00:09:35.081 8519.680 - 8570.092: 98.2752% ( 23) 00:09:35.081 8570.092 - 8620.505: 98.3056% ( 6) 00:09:35.081 8620.505 - 8670.917: 98.3462% ( 8) 00:09:35.081 8670.917 - 8721.329: 98.3716% ( 5) 00:09:35.081 8721.329 - 8771.742: 98.3969% ( 5) 00:09:35.081 8771.742 - 8822.154: 98.4172% ( 4) 00:09:35.081 8822.154 - 8872.566: 98.4426% ( 5) 00:09:35.081 8872.566 - 8922.978: 98.4832% ( 8) 00:09:35.081 8922.978 - 8973.391: 98.4984% ( 3) 00:09:35.081 8973.391 - 9023.803: 98.5085% ( 2) 00:09:35.081 9023.803 - 9074.215: 98.5187% ( 2) 00:09:35.081 9074.215 - 9124.628: 98.5288% ( 2) 00:09:35.081 9124.628 - 9175.040: 98.5948% ( 13) 00:09:35.081 9175.040 - 9225.452: 98.6709% ( 15) 00:09:35.081 9225.452 - 9275.865: 98.7419% ( 14) 00:09:35.081 9275.865 - 9326.277: 98.7622% ( 4) 00:09:35.081 9326.277 - 9376.689: 98.7672% ( 1) 00:09:35.081 9376.689 - 9427.102: 98.7825% ( 3) 00:09:35.081 9427.102 - 9477.514: 98.7875% ( 1) 00:09:35.081 9477.514 - 9527.926: 98.8028% ( 3) 00:09:35.081 9527.926 - 9578.338: 98.8180% ( 3) 00:09:35.081 9578.338 - 9628.751: 98.8535% ( 7) 00:09:35.081 9628.751 - 9679.163: 98.8789% ( 5) 00:09:35.081 9679.163 - 9729.575: 98.8941% ( 3) 00:09:35.081 9729.575 - 9779.988: 98.9144% ( 4) 00:09:35.081 9779.988 - 9830.400: 98.9347% ( 4) 00:09:35.081 9830.400 - 9880.812: 98.9448% ( 2) 00:09:35.081 9880.812 - 9931.225: 98.9752% ( 6) 00:09:35.081 9931.225 - 9981.637: 99.0158% ( 8) 00:09:35.081 9981.637 - 10032.049: 99.0463% ( 6) 00:09:35.081 10032.049 - 10082.462: 99.1173% ( 14) 00:09:35.081 10082.462 - 10132.874: 99.1680% ( 10) 00:09:35.081 10132.874 - 10183.286: 99.2086% ( 8) 00:09:35.081 10183.286 - 10233.698: 99.2238% ( 3) 00:09:35.081 10233.698 - 10284.111: 99.2289% ( 1) 00:09:35.081 10485.760 - 10536.172: 99.2390% ( 2) 00:09:35.081 10536.172 - 10586.585: 99.2492% ( 2) 00:09:35.081 10586.585 - 10636.997: 99.2695% ( 4) 
00:09:35.081 10636.997 - 10687.409: 99.2847% ( 3) 00:09:35.081 10687.409 - 10737.822: 99.2999% ( 3) 00:09:35.081 10737.822 - 10788.234: 99.3151% ( 3) 00:09:35.081 10788.234 - 10838.646: 99.3304% ( 3) 00:09:35.081 10838.646 - 10889.058: 99.3506% ( 4) 00:09:35.081 23391.311 - 23492.135: 99.3709% ( 4) 00:09:35.081 23492.135 - 23592.960: 99.3811% ( 2) 00:09:35.081 23592.960 - 23693.785: 99.3912% ( 2) 00:09:35.081 23693.785 - 23794.609: 99.4115% ( 4) 00:09:35.081 23794.609 - 23895.434: 99.4166% ( 1) 00:09:35.081 23895.434 - 23996.258: 99.4267% ( 2) 00:09:35.081 23996.258 - 24097.083: 99.4470% ( 4) 00:09:35.081 24097.083 - 24197.908: 99.4623% ( 3) 00:09:35.081 24197.908 - 24298.732: 99.4775% ( 3) 00:09:35.081 24298.732 - 24399.557: 99.4978% ( 4) 00:09:35.081 24399.557 - 24500.382: 99.5079% ( 2) 00:09:35.081 24500.382 - 24601.206: 99.5231% ( 3) 00:09:35.081 24601.206 - 24702.031: 99.5384% ( 3) 00:09:35.081 24702.031 - 24802.855: 99.5536% ( 3) 00:09:35.081 24802.855 - 24903.680: 99.5688% ( 3) 00:09:35.081 24903.680 - 25004.505: 99.5840% ( 3) 00:09:35.081 25004.505 - 25105.329: 99.5992% ( 3) 00:09:35.081 25105.329 - 25206.154: 99.6195% ( 4) 00:09:35.081 25206.154 - 25306.978: 99.6347% ( 3) 00:09:35.081 25306.978 - 25407.803: 99.6500% ( 3) 00:09:35.081 25407.803 - 25508.628: 99.6703% ( 4) 00:09:35.081 25508.628 - 25609.452: 99.6855% ( 3) 00:09:35.081 25609.452 - 25710.277: 99.7007% ( 3) 00:09:35.081 25710.277 - 25811.102: 99.7159% ( 3) 00:09:35.081 25811.102 - 26012.751: 99.7514% ( 7) 00:09:35.081 26012.751 - 26214.400: 99.7869% ( 7) 00:09:35.081 26214.400 - 26416.049: 99.8174% ( 6) 00:09:35.081 26416.049 - 26617.698: 99.8529% ( 7) 00:09:35.081 26617.698 - 26819.348: 99.8884% ( 7) 00:09:35.081 26819.348 - 27020.997: 99.9239% ( 7) 00:09:35.081 27020.997 - 27222.646: 99.9594% ( 7) 00:09:35.081 27222.646 - 27424.295: 99.9949% ( 7) 00:09:35.081 27424.295 - 27625.945: 100.0000% ( 1) 00:09:35.081 00:09:35.081 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:35.081 ============================================================================== 00:09:35.081 Range in us Cumulative IO count 00:09:35.081 5116.849 - 5142.055: 0.0051% ( 1) 00:09:35.081 5192.468 - 5217.674: 0.0101% ( 1) 00:09:35.081 5217.674 - 5242.880: 0.0304% ( 4) 00:09:35.081 5242.880 - 5268.086: 0.0457% ( 3) 00:09:35.081 5268.086 - 5293.292: 0.0964% ( 10) 00:09:35.081 5293.292 - 5318.498: 0.1167% ( 4) 00:09:35.081 5318.498 - 5343.705: 0.1623% ( 9) 00:09:35.081 5343.705 - 5368.911: 0.2029% ( 8) 00:09:35.081 5368.911 - 5394.117: 0.2486% ( 9) 00:09:35.081 5394.117 - 5419.323: 0.3044% ( 11) 00:09:35.081 5419.323 - 5444.529: 0.3754% ( 14) 00:09:35.081 5444.529 - 5469.735: 0.5124% ( 27) 00:09:35.081 5469.735 - 5494.942: 0.7001% ( 37) 00:09:35.081 5494.942 - 5520.148: 0.9284% ( 45) 00:09:35.081 5520.148 - 5545.354: 1.2733% ( 68) 00:09:35.081 5545.354 - 5570.560: 1.6741% ( 79) 00:09:35.081 5570.560 - 5595.766: 2.0444% ( 73) 00:09:35.081 5595.766 - 5620.972: 2.5467% ( 99) 00:09:35.081 5620.972 - 5646.178: 3.0895% ( 107) 00:09:35.081 5646.178 - 5671.385: 3.6780% ( 116) 00:09:35.081 5671.385 - 5696.591: 4.3375% ( 130) 00:09:35.081 5696.591 - 5721.797: 5.1745% ( 165) 00:09:35.081 5721.797 - 5747.003: 6.0522% ( 173) 00:09:35.081 5747.003 - 5772.209: 7.1226% ( 211) 00:09:35.081 5772.209 - 5797.415: 8.2589% ( 224) 00:09:35.081 5797.415 - 5822.622: 9.5779% ( 260) 00:09:35.082 5822.622 - 5847.828: 11.2064% ( 321) 00:09:35.082 5847.828 - 5873.034: 12.7739% ( 309) 00:09:35.082 5873.034 - 5898.240: 14.7271% ( 385) 00:09:35.082 5898.240 - 
5923.446: 16.6396% ( 377) 00:09:35.082 5923.446 - 5948.652: 18.8565% ( 437) 00:09:35.082 5948.652 - 5973.858: 21.4641% ( 514) 00:09:35.082 5973.858 - 5999.065: 23.7114% ( 443) 00:09:35.082 5999.065 - 6024.271: 25.9334% ( 438) 00:09:35.082 6024.271 - 6049.477: 28.3279% ( 472) 00:09:35.082 6049.477 - 6074.683: 30.4332% ( 415) 00:09:35.082 6074.683 - 6099.889: 32.5538% ( 418) 00:09:35.082 6099.889 - 6125.095: 35.3896% ( 559) 00:09:35.082 6125.095 - 6150.302: 37.6928% ( 454) 00:09:35.082 6150.302 - 6175.508: 39.9706% ( 449) 00:09:35.082 6175.508 - 6200.714: 43.2123% ( 639) 00:09:35.082 6200.714 - 6225.920: 46.0582% ( 561) 00:09:35.082 6225.920 - 6251.126: 48.6607% ( 513) 00:09:35.082 6251.126 - 6276.332: 51.5219% ( 564) 00:09:35.082 6276.332 - 6301.538: 54.1092% ( 510) 00:09:35.082 6301.538 - 6326.745: 57.0871% ( 587) 00:09:35.082 6326.745 - 6351.951: 59.4156% ( 459) 00:09:35.082 6351.951 - 6377.157: 61.6477% ( 440) 00:09:35.082 6377.157 - 6402.363: 63.6009% ( 385) 00:09:35.082 6402.363 - 6427.569: 65.2699% ( 329) 00:09:35.082 6427.569 - 6452.775: 67.2078% ( 382) 00:09:35.082 6452.775 - 6503.188: 70.1705% ( 584) 00:09:35.082 6503.188 - 6553.600: 72.9048% ( 539) 00:09:35.082 6553.600 - 6604.012: 75.4312% ( 498) 00:09:35.082 6604.012 - 6654.425: 77.8257% ( 472) 00:09:35.082 6654.425 - 6704.837: 79.9716% ( 423) 00:09:35.082 6704.837 - 6755.249: 82.2748% ( 454) 00:09:35.082 6755.249 - 6805.662: 84.1061% ( 361) 00:09:35.082 6805.662 - 6856.074: 85.8868% ( 351) 00:09:35.082 6856.074 - 6906.486: 87.5000% ( 318) 00:09:35.082 6906.486 - 6956.898: 88.8444% ( 265) 00:09:35.082 6956.898 - 7007.311: 89.9858% ( 225) 00:09:35.082 7007.311 - 7057.723: 90.9395% ( 188) 00:09:35.082 7057.723 - 7108.135: 91.7259% ( 155) 00:09:35.082 7108.135 - 7158.548: 92.3498% ( 123) 00:09:35.082 7158.548 - 7208.960: 92.8977% ( 108) 00:09:35.082 7208.960 - 7259.372: 93.3797% ( 95) 00:09:35.082 7259.372 - 7309.785: 93.7500% ( 73) 00:09:35.082 7309.785 - 7360.197: 94.1102% ( 71) 00:09:35.082 7360.197 - 7410.609: 94.3994% ( 57) 00:09:35.082 7410.609 - 7461.022: 94.6530% ( 50) 00:09:35.082 7461.022 - 7511.434: 94.9320% ( 55) 00:09:35.082 7511.434 - 7561.846: 95.2110% ( 55) 00:09:35.082 7561.846 - 7612.258: 95.5509% ( 67) 00:09:35.082 7612.258 - 7662.671: 95.7640% ( 42) 00:09:35.082 7662.671 - 7713.083: 95.9517% ( 37) 00:09:35.082 7713.083 - 7763.495: 96.1851% ( 46) 00:09:35.082 7763.495 - 7813.908: 96.3880% ( 40) 00:09:35.082 7813.908 - 7864.320: 96.5808% ( 38) 00:09:35.082 7864.320 - 7914.732: 96.9105% ( 65) 00:09:35.082 7914.732 - 7965.145: 97.0627% ( 30) 00:09:35.082 7965.145 - 8015.557: 97.3214% ( 51) 00:09:35.082 8015.557 - 8065.969: 97.4381% ( 23) 00:09:35.082 8065.969 - 8116.382: 97.5548% ( 23) 00:09:35.082 8116.382 - 8166.794: 97.6512% ( 19) 00:09:35.082 8166.794 - 8217.206: 97.7577% ( 21) 00:09:35.082 8217.206 - 8267.618: 97.8338% ( 15) 00:09:35.082 8267.618 - 8318.031: 97.8795% ( 9) 00:09:35.082 8318.031 - 8368.443: 97.9200% ( 8) 00:09:35.082 8368.443 - 8418.855: 97.9809% ( 12) 00:09:35.082 8418.855 - 8469.268: 98.0418% ( 12) 00:09:35.082 8469.268 - 8519.680: 98.0976% ( 11) 00:09:35.082 8519.680 - 8570.092: 98.1230% ( 5) 00:09:35.082 8570.092 - 8620.505: 98.1534% ( 6) 00:09:35.082 8620.505 - 8670.917: 98.1686% ( 3) 00:09:35.082 8670.917 - 8721.329: 98.1788% ( 2) 00:09:35.082 8721.329 - 8771.742: 98.2599% ( 16) 00:09:35.082 8771.742 - 8822.154: 98.4832% ( 44) 00:09:35.082 8822.154 - 8872.566: 98.5288% ( 9) 00:09:35.082 8872.566 - 8922.978: 98.5593% ( 6) 00:09:35.082 8922.978 - 8973.391: 98.5897% ( 6) 00:09:35.082 8973.391 
- 9023.803: 98.6151% ( 5) 00:09:35.082 9023.803 - 9074.215: 98.6556% ( 8) 00:09:35.082 9074.215 - 9124.628: 98.6810% ( 5) 00:09:35.082 9124.628 - 9175.040: 98.7064% ( 5) 00:09:35.082 9175.040 - 9225.452: 98.7368% ( 6) 00:09:35.082 9225.452 - 9275.865: 98.7622% ( 5) 00:09:35.082 9275.865 - 9326.277: 98.7977% ( 7) 00:09:35.082 9326.277 - 9376.689: 98.8281% ( 6) 00:09:35.082 9376.689 - 9427.102: 98.8636% ( 7) 00:09:35.082 9427.102 - 9477.514: 98.8890% ( 5) 00:09:35.082 9477.514 - 9527.926: 98.9245% ( 7) 00:09:35.082 9527.926 - 9578.338: 98.9600% ( 7) 00:09:35.082 9578.338 - 9628.751: 98.9803% ( 4) 00:09:35.082 9628.751 - 9679.163: 98.9955% ( 3) 00:09:35.082 9679.163 - 9729.575: 99.0108% ( 3) 00:09:35.082 9729.575 - 9779.988: 99.0310% ( 4) 00:09:35.082 9779.988 - 9830.400: 99.0463% ( 3) 00:09:35.082 9830.400 - 9880.812: 99.0666% ( 4) 00:09:35.082 9880.812 - 9931.225: 99.0818% ( 3) 00:09:35.082 9931.225 - 9981.637: 99.0970% ( 3) 00:09:35.082 9981.637 - 10032.049: 99.1122% ( 3) 00:09:35.082 10032.049 - 10082.462: 99.1274% ( 3) 00:09:35.082 10082.462 - 10132.874: 99.1477% ( 4) 00:09:35.082 10132.874 - 10183.286: 99.1629% ( 3) 00:09:35.082 10183.286 - 10233.698: 99.1782% ( 3) 00:09:35.082 10233.698 - 10284.111: 99.1934% ( 3) 00:09:35.082 10284.111 - 10334.523: 99.2086% ( 3) 00:09:35.082 10334.523 - 10384.935: 99.2238% ( 3) 00:09:35.082 10384.935 - 10435.348: 99.2441% ( 4) 00:09:35.082 10435.348 - 10485.760: 99.2593% ( 3) 00:09:35.082 10485.760 - 10536.172: 99.2796% ( 4) 00:09:35.082 10536.172 - 10586.585: 99.2948% ( 3) 00:09:35.082 10586.585 - 10636.997: 99.3151% ( 4) 00:09:35.082 10636.997 - 10687.409: 99.3304% ( 3) 00:09:35.082 10687.409 - 10737.822: 99.3456% ( 3) 00:09:35.082 10737.822 - 10788.234: 99.3506% ( 1) 00:09:35.082 23088.837 - 23189.662: 99.3557% ( 1) 00:09:35.082 23492.135 - 23592.960: 99.4318% ( 15) 00:09:35.082 23592.960 - 23693.785: 99.4876% ( 11) 00:09:35.082 23693.785 - 23794.609: 99.5739% ( 17) 00:09:35.082 23794.609 - 23895.434: 99.5992% ( 5) 00:09:35.082 23895.434 - 23996.258: 99.6144% ( 3) 00:09:35.082 23996.258 - 24097.083: 99.6297% ( 3) 00:09:35.082 24097.083 - 24197.908: 99.6449% ( 3) 00:09:35.082 24197.908 - 24298.732: 99.6601% ( 3) 00:09:35.082 24298.732 - 24399.557: 99.6703% ( 2) 00:09:35.082 24399.557 - 24500.382: 99.6855% ( 3) 00:09:35.082 24500.382 - 24601.206: 99.7007% ( 3) 00:09:35.082 24601.206 - 24702.031: 99.7159% ( 3) 00:09:35.082 24702.031 - 24802.855: 99.7261% ( 2) 00:09:35.082 24802.855 - 24903.680: 99.7362% ( 2) 00:09:35.082 24903.680 - 25004.505: 99.7514% ( 3) 00:09:35.082 25004.505 - 25105.329: 99.7666% ( 3) 00:09:35.082 25105.329 - 25206.154: 99.7768% ( 2) 00:09:35.082 25206.154 - 25306.978: 99.7920% ( 3) 00:09:35.082 25306.978 - 25407.803: 99.8072% ( 3) 00:09:35.082 25407.803 - 25508.628: 99.8275% ( 4) 00:09:35.082 25508.628 - 25609.452: 99.8427% ( 3) 00:09:35.082 25609.452 - 25710.277: 99.8630% ( 4) 00:09:35.082 25710.277 - 25811.102: 99.8782% ( 3) 00:09:35.082 25811.102 - 26012.751: 99.9138% ( 7) 00:09:35.082 26012.751 - 26214.400: 99.9493% ( 7) 00:09:35.082 26214.400 - 26416.049: 99.9848% ( 7) 00:09:35.082 26416.049 - 26617.698: 100.0000% ( 3) 00:09:35.082 00:09:35.082 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:35.082 ============================================================================== 00:09:35.082 Range in us Cumulative IO count 00:09:35.082 5041.231 - 5066.437: 0.0051% ( 1) 00:09:35.082 5066.437 - 5091.643: 0.0101% ( 1) 00:09:35.082 5142.055 - 5167.262: 0.0203% ( 2) 00:09:35.082 5167.262 - 5192.468: 0.0254% ( 1) 
00:09:35.082 5192.468 - 5217.674: 0.0355% ( 2) 00:09:35.082 5217.674 - 5242.880: 0.0406% ( 1) 00:09:35.082 5242.880 - 5268.086: 0.0507% ( 2) 00:09:35.082 5268.086 - 5293.292: 0.0609% ( 2) 00:09:35.082 5293.292 - 5318.498: 0.1065% ( 9) 00:09:35.082 5318.498 - 5343.705: 0.1573% ( 10) 00:09:35.082 5343.705 - 5368.911: 0.2080% ( 10) 00:09:35.082 5368.911 - 5394.117: 0.2486% ( 8) 00:09:35.082 5394.117 - 5419.323: 0.2892% ( 8) 00:09:35.082 5419.323 - 5444.529: 0.4058% ( 23) 00:09:35.082 5444.529 - 5469.735: 0.5580% ( 30) 00:09:35.082 5469.735 - 5494.942: 0.7762% ( 43) 00:09:35.082 5494.942 - 5520.148: 1.0095% ( 46) 00:09:35.082 5520.148 - 5545.354: 1.2733% ( 52) 00:09:35.082 5545.354 - 5570.560: 1.5828% ( 61) 00:09:35.082 5570.560 - 5595.766: 1.9785% ( 78) 00:09:35.082 5595.766 - 5620.972: 2.4756% ( 98) 00:09:35.082 5620.972 - 5646.178: 2.9525% ( 94) 00:09:35.082 5646.178 - 5671.385: 3.5968% ( 127) 00:09:35.082 5671.385 - 5696.591: 4.3780% ( 154) 00:09:35.082 5696.591 - 5721.797: 5.3876% ( 199) 00:09:35.082 5721.797 - 5747.003: 6.5544% ( 230) 00:09:35.082 5747.003 - 5772.209: 7.8582% ( 257) 00:09:35.082 5772.209 - 5797.415: 9.2634% ( 277) 00:09:35.082 5797.415 - 5822.622: 10.7295% ( 289) 00:09:35.082 5822.622 - 5847.828: 12.3630% ( 322) 00:09:35.082 5847.828 - 5873.034: 14.3364% ( 389) 00:09:35.082 5873.034 - 5898.240: 16.3048% ( 388) 00:09:35.082 5898.240 - 5923.446: 18.1970% ( 373) 00:09:35.082 5923.446 - 5948.652: 20.3074% ( 416) 00:09:35.082 5948.652 - 5973.858: 22.3366% ( 400) 00:09:35.082 5973.858 - 5999.065: 24.3811% ( 403) 00:09:35.082 5999.065 - 6024.271: 26.4915% ( 416) 00:09:35.082 6024.271 - 6049.477: 28.4649% ( 389) 00:09:35.082 6049.477 - 6074.683: 30.6970% ( 440) 00:09:35.082 6074.683 - 6099.889: 33.0053% ( 455) 00:09:35.082 6099.889 - 6125.095: 35.3490% ( 462) 00:09:35.082 6125.095 - 6150.302: 37.7029% ( 464) 00:09:35.082 6150.302 - 6175.508: 40.0822% ( 469) 00:09:35.082 6175.508 - 6200.714: 43.1412% ( 603) 00:09:35.082 6200.714 - 6225.920: 46.1648% ( 596) 00:09:35.082 6225.920 - 6251.126: 48.7977% ( 519) 00:09:35.082 6251.126 - 6276.332: 51.8314% ( 598) 00:09:35.082 6276.332 - 6301.538: 54.7484% ( 575) 00:09:35.082 6301.538 - 6326.745: 57.2392% ( 491) 00:09:35.082 6326.745 - 6351.951: 59.2634% ( 399) 00:09:35.083 6351.951 - 6377.157: 61.5006% ( 441) 00:09:35.083 6377.157 - 6402.363: 63.7074% ( 435) 00:09:35.083 6402.363 - 6427.569: 65.3003% ( 314) 00:09:35.083 6427.569 - 6452.775: 66.7969% ( 295) 00:09:35.083 6452.775 - 6503.188: 69.5059% ( 534) 00:09:35.083 6503.188 - 6553.600: 71.8395% ( 460) 00:09:35.083 6553.600 - 6604.012: 74.3050% ( 486) 00:09:35.083 6604.012 - 6654.425: 76.4864% ( 430) 00:09:35.083 6654.425 - 6704.837: 78.6881% ( 434) 00:09:35.083 6704.837 - 6755.249: 80.9000% ( 436) 00:09:35.083 6755.249 - 6805.662: 82.8024% ( 375) 00:09:35.083 6805.662 - 6856.074: 84.6388% ( 362) 00:09:35.083 6856.074 - 6906.486: 86.4499% ( 357) 00:09:35.083 6906.486 - 6956.898: 88.0885% ( 323) 00:09:35.083 6956.898 - 7007.311: 89.6155% ( 301) 00:09:35.083 7007.311 - 7057.723: 90.7569% ( 225) 00:09:35.083 7057.723 - 7108.135: 91.7614% ( 198) 00:09:35.083 7108.135 - 7158.548: 92.4209% ( 130) 00:09:35.083 7158.548 - 7208.960: 92.9738% ( 109) 00:09:35.083 7208.960 - 7259.372: 93.4405% ( 92) 00:09:35.083 7259.372 - 7309.785: 93.8007% ( 71) 00:09:35.083 7309.785 - 7360.197: 94.3233% ( 103) 00:09:35.083 7360.197 - 7410.609: 94.6682% ( 68) 00:09:35.083 7410.609 - 7461.022: 95.0132% ( 68) 00:09:35.083 7461.022 - 7511.434: 95.2973% ( 56) 00:09:35.083 7511.434 - 7561.846: 95.5712% ( 54) 
00:09:35.083 7561.846 - 7612.258: 95.8705% ( 59) 00:09:35.083 7612.258 - 7662.671: 96.1597% ( 57) 00:09:35.083 7662.671 - 7713.083: 96.3829% ( 44) 00:09:35.083 7713.083 - 7763.495: 96.5960% ( 42) 00:09:35.083 7763.495 - 7813.908: 96.8040% ( 41) 00:09:35.083 7813.908 - 7864.320: 96.9308% ( 25) 00:09:35.083 7864.320 - 7914.732: 97.0424% ( 22) 00:09:35.083 7914.732 - 7965.145: 97.1540% ( 22) 00:09:35.083 7965.145 - 8015.557: 97.2707% ( 23) 00:09:35.083 8015.557 - 8065.969: 97.3925% ( 24) 00:09:35.083 8065.969 - 8116.382: 97.5193% ( 25) 00:09:35.083 8116.382 - 8166.794: 97.6461% ( 25) 00:09:35.083 8166.794 - 8217.206: 97.7019% ( 11) 00:09:35.083 8217.206 - 8267.618: 97.7679% ( 13) 00:09:35.083 8267.618 - 8318.031: 97.8338% ( 13) 00:09:35.083 8318.031 - 8368.443: 97.8845% ( 10) 00:09:35.083 8368.443 - 8418.855: 97.9606% ( 15) 00:09:35.083 8418.855 - 8469.268: 98.2143% ( 50) 00:09:35.083 8469.268 - 8519.680: 98.3360% ( 24) 00:09:35.083 8519.680 - 8570.092: 98.3817% ( 9) 00:09:35.083 8570.092 - 8620.505: 98.4172% ( 7) 00:09:35.083 8620.505 - 8670.917: 98.4527% ( 7) 00:09:35.083 8670.917 - 8721.329: 98.4832% ( 6) 00:09:35.083 8721.329 - 8771.742: 98.5034% ( 4) 00:09:35.083 8771.742 - 8822.154: 98.5339% ( 6) 00:09:35.083 8822.154 - 8872.566: 98.5593% ( 5) 00:09:35.083 8872.566 - 8922.978: 98.5948% ( 7) 00:09:35.083 8922.978 - 8973.391: 98.6201% ( 5) 00:09:35.083 8973.391 - 9023.803: 98.6556% ( 7) 00:09:35.083 9023.803 - 9074.215: 98.6709% ( 3) 00:09:35.083 9074.215 - 9124.628: 98.6810% ( 2) 00:09:35.083 9124.628 - 9175.040: 98.7267% ( 9) 00:09:35.083 9175.040 - 9225.452: 98.7672% ( 8) 00:09:35.083 9225.452 - 9275.865: 98.8028% ( 7) 00:09:35.083 9275.865 - 9326.277: 98.8383% ( 7) 00:09:35.083 9326.277 - 9376.689: 98.8636% ( 5) 00:09:35.083 9376.689 - 9427.102: 98.8941% ( 6) 00:09:35.083 9427.102 - 9477.514: 98.9144% ( 4) 00:09:35.083 9477.514 - 9527.926: 98.9397% ( 5) 00:09:35.083 9527.926 - 9578.338: 98.9702% ( 6) 00:09:35.083 9578.338 - 9628.751: 99.0209% ( 10) 00:09:35.083 9628.751 - 9679.163: 99.1427% ( 24) 00:09:35.083 9679.163 - 9729.575: 99.1579% ( 3) 00:09:35.083 9729.575 - 9779.988: 99.1731% ( 3) 00:09:35.083 9779.988 - 9830.400: 99.1883% ( 3) 00:09:35.083 9830.400 - 9880.812: 99.1985% ( 2) 00:09:35.083 9880.812 - 9931.225: 99.2137% ( 3) 00:09:35.083 9931.225 - 9981.637: 99.2238% ( 2) 00:09:35.083 9981.637 - 10032.049: 99.2390% ( 3) 00:09:35.083 10032.049 - 10082.462: 99.2492% ( 2) 00:09:35.083 10082.462 - 10132.874: 99.2644% ( 3) 00:09:35.083 10132.874 - 10183.286: 99.2746% ( 2) 00:09:35.083 10183.286 - 10233.698: 99.2847% ( 2) 00:09:35.083 10233.698 - 10284.111: 99.2999% ( 3) 00:09:35.083 10284.111 - 10334.523: 99.3151% ( 3) 00:09:35.083 10334.523 - 10384.935: 99.3304% ( 3) 00:09:35.083 10384.935 - 10435.348: 99.3456% ( 3) 00:09:35.083 10435.348 - 10485.760: 99.3506% ( 1) 00:09:35.083 23088.837 - 23189.662: 99.3912% ( 8) 00:09:35.083 23189.662 - 23290.486: 99.4420% ( 10) 00:09:35.083 23290.486 - 23391.311: 99.4825% ( 8) 00:09:35.083 23391.311 - 23492.135: 99.5536% ( 14) 00:09:35.083 23492.135 - 23592.960: 99.6500% ( 19) 00:09:35.083 23592.960 - 23693.785: 99.6855% ( 7) 00:09:35.083 23693.785 - 23794.609: 99.7007% ( 3) 00:09:35.083 23794.609 - 23895.434: 99.7159% ( 3) 00:09:35.083 23895.434 - 23996.258: 99.7261% ( 2) 00:09:35.083 23996.258 - 24097.083: 99.7413% ( 3) 00:09:35.083 24097.083 - 24197.908: 99.7514% ( 2) 00:09:35.083 24197.908 - 24298.732: 99.7666% ( 3) 00:09:35.083 24298.732 - 24399.557: 99.7819% ( 3) 00:09:35.083 24399.557 - 24500.382: 99.7971% ( 3) 00:09:35.083 24500.382 - 
24601.206: 99.8072% ( 2) 00:09:35.083 24601.206 - 24702.031: 99.8224% ( 3) 00:09:35.083 24702.031 - 24802.855: 99.8427% ( 4) 00:09:35.083 24802.855 - 24903.680: 99.8529% ( 2) 00:09:35.083 24903.680 - 25004.505: 99.8732% ( 4) 00:09:35.083 25004.505 - 25105.329: 99.8884% ( 3) 00:09:35.083 25105.329 - 25206.154: 99.9036% ( 3) 00:09:35.083 25206.154 - 25306.978: 99.9188% ( 3) 00:09:35.083 25306.978 - 25407.803: 99.9341% ( 3) 00:09:35.083 25407.803 - 25508.628: 99.9493% ( 3) 00:09:35.083 25508.628 - 25609.452: 99.9645% ( 3) 00:09:35.083 25609.452 - 25710.277: 99.9797% ( 3) 00:09:35.083 25710.277 - 25811.102: 99.9949% ( 3) 00:09:35.083 25811.102 - 26012.751: 100.0000% ( 1) 00:09:35.083 00:09:35.083 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:35.083 ============================================================================== 00:09:35.083 Range in us Cumulative IO count 00:09:35.083 5066.437 - 5091.643: 0.0051% ( 1) 00:09:35.083 5192.468 - 5217.674: 0.0152% ( 2) 00:09:35.083 5242.880 - 5268.086: 0.0254% ( 2) 00:09:35.083 5293.292 - 5318.498: 0.0355% ( 2) 00:09:35.083 5318.498 - 5343.705: 0.0457% ( 2) 00:09:35.083 5343.705 - 5368.911: 0.0964% ( 10) 00:09:35.083 5368.911 - 5394.117: 0.1776% ( 16) 00:09:35.083 5394.117 - 5419.323: 0.2689% ( 18) 00:09:35.083 5419.323 - 5444.529: 0.4261% ( 31) 00:09:35.083 5444.529 - 5469.735: 0.5530% ( 25) 00:09:35.083 5469.735 - 5494.942: 0.7204% ( 33) 00:09:35.083 5494.942 - 5520.148: 0.9030% ( 36) 00:09:35.083 5520.148 - 5545.354: 1.1009% ( 39) 00:09:35.083 5545.354 - 5570.560: 1.4103% ( 61) 00:09:35.083 5570.560 - 5595.766: 1.7553% ( 68) 00:09:35.083 5595.766 - 5620.972: 2.1713% ( 82) 00:09:35.083 5620.972 - 5646.178: 2.6532% ( 95) 00:09:35.083 5646.178 - 5671.385: 3.2366% ( 115) 00:09:35.083 5671.385 - 5696.591: 4.0686% ( 164) 00:09:35.083 5696.591 - 5721.797: 5.0020% ( 184) 00:09:35.083 5721.797 - 5747.003: 6.1536% ( 227) 00:09:35.083 5747.003 - 5772.209: 7.4371% ( 253) 00:09:35.083 5772.209 - 5797.415: 8.8068% ( 270) 00:09:35.083 5797.415 - 5822.622: 10.3998% ( 314) 00:09:35.083 5822.622 - 5847.828: 11.9572% ( 307) 00:09:35.083 5847.828 - 5873.034: 13.7733% ( 358) 00:09:35.083 5873.034 - 5898.240: 15.5844% ( 357) 00:09:35.083 5898.240 - 5923.446: 17.5274% ( 383) 00:09:35.083 5923.446 - 5948.652: 19.5668% ( 402) 00:09:35.083 5948.652 - 5973.858: 21.6366% ( 408) 00:09:35.083 5973.858 - 5999.065: 23.8636% ( 439) 00:09:35.083 5999.065 - 6024.271: 26.0552% ( 432) 00:09:35.083 6024.271 - 6049.477: 28.1656% ( 416) 00:09:35.083 6049.477 - 6074.683: 30.5296% ( 466) 00:09:35.083 6074.683 - 6099.889: 32.8277% ( 453) 00:09:35.083 6099.889 - 6125.095: 35.6078% ( 548) 00:09:35.083 6125.095 - 6150.302: 38.1494% ( 501) 00:09:35.083 6150.302 - 6175.508: 40.3663% ( 437) 00:09:35.083 6175.508 - 6200.714: 42.9332% ( 506) 00:09:35.083 6200.714 - 6225.920: 46.2104% ( 646) 00:09:35.083 6225.920 - 6251.126: 49.2340% ( 596) 00:09:35.083 6251.126 - 6276.332: 52.2727% ( 599) 00:09:35.083 6276.332 - 6301.538: 54.7484% ( 488) 00:09:35.083 6301.538 - 6326.745: 57.2849% ( 500) 00:09:35.083 6326.745 - 6351.951: 59.3243% ( 402) 00:09:35.083 6351.951 - 6377.157: 61.2774% ( 385) 00:09:35.083 6377.157 - 6402.363: 63.4943% ( 437) 00:09:35.083 6402.363 - 6427.569: 65.2496% ( 346) 00:09:35.083 6427.569 - 6452.775: 66.8730% ( 320) 00:09:35.083 6452.775 - 6503.188: 70.0893% ( 634) 00:09:35.083 6503.188 - 6553.600: 72.9759% ( 569) 00:09:35.083 6553.600 - 6604.012: 75.3754% ( 473) 00:09:35.083 6604.012 - 6654.425: 77.6177% ( 442) 00:09:35.083 6654.425 - 6704.837: 79.6672% ( 
404) 00:09:35.083 6704.837 - 6755.249: 81.7015% ( 401) 00:09:35.083 6755.249 - 6805.662: 83.5582% ( 366) 00:09:35.083 6805.662 - 6856.074: 85.4099% ( 365) 00:09:35.083 6856.074 - 6906.486: 87.3478% ( 382) 00:09:35.083 6906.486 - 6956.898: 88.9103% ( 308) 00:09:35.083 6956.898 - 7007.311: 90.2039% ( 255) 00:09:35.083 7007.311 - 7057.723: 91.2338% ( 203) 00:09:35.083 7057.723 - 7108.135: 91.9237% ( 136) 00:09:35.083 7108.135 - 7158.548: 92.4564% ( 105) 00:09:35.083 7158.548 - 7208.960: 93.0854% ( 124) 00:09:35.083 7208.960 - 7259.372: 93.4558% ( 73) 00:09:35.083 7259.372 - 7309.785: 93.7551% ( 59) 00:09:35.083 7309.785 - 7360.197: 94.0290% ( 54) 00:09:35.083 7360.197 - 7410.609: 94.2776% ( 49) 00:09:35.083 7410.609 - 7461.022: 94.5160% ( 47) 00:09:35.083 7461.022 - 7511.434: 94.7545% ( 47) 00:09:35.083 7511.434 - 7561.846: 94.9777% ( 44) 00:09:35.083 7561.846 - 7612.258: 95.2212% ( 48) 00:09:35.083 7612.258 - 7662.671: 95.4647% ( 48) 00:09:35.083 7662.671 - 7713.083: 95.8553% ( 77) 00:09:35.083 7713.083 - 7763.495: 96.1191% ( 52) 00:09:35.083 7763.495 - 7813.908: 96.3677% ( 49) 00:09:35.084 7813.908 - 7864.320: 96.6264% ( 51) 00:09:35.084 7864.320 - 7914.732: 96.8091% ( 36) 00:09:35.084 7914.732 - 7965.145: 97.0272% ( 43) 00:09:35.084 7965.145 - 8015.557: 97.2606% ( 46) 00:09:35.084 8015.557 - 8065.969: 97.5802% ( 63) 00:09:35.084 8065.969 - 8116.382: 97.8237% ( 48) 00:09:35.084 8116.382 - 8166.794: 97.8998% ( 15) 00:09:35.084 8166.794 - 8217.206: 97.9657% ( 13) 00:09:35.084 8217.206 - 8267.618: 98.0215% ( 11) 00:09:35.084 8267.618 - 8318.031: 98.0925% ( 14) 00:09:35.084 8318.031 - 8368.443: 98.1534% ( 12) 00:09:35.084 8368.443 - 8418.855: 98.2041% ( 10) 00:09:35.084 8418.855 - 8469.268: 98.2397% ( 7) 00:09:35.084 8469.268 - 8519.680: 98.2802% ( 8) 00:09:35.084 8519.680 - 8570.092: 98.3208% ( 8) 00:09:35.084 8570.092 - 8620.505: 98.3563% ( 7) 00:09:35.084 8620.505 - 8670.917: 98.3969% ( 8) 00:09:35.084 8670.917 - 8721.329: 98.4476% ( 10) 00:09:35.084 8721.329 - 8771.742: 98.4933% ( 9) 00:09:35.084 8771.742 - 8822.154: 98.5187% ( 5) 00:09:35.084 8822.154 - 8872.566: 98.5440% ( 5) 00:09:35.084 8872.566 - 8922.978: 98.5694% ( 5) 00:09:35.084 8922.978 - 8973.391: 98.5897% ( 4) 00:09:35.084 8973.391 - 9023.803: 98.6049% ( 3) 00:09:35.084 9023.803 - 9074.215: 98.6252% ( 4) 00:09:35.084 9074.215 - 9124.628: 98.6404% ( 3) 00:09:35.084 9124.628 - 9175.040: 98.6658% ( 5) 00:09:35.084 9175.040 - 9225.452: 98.6861% ( 4) 00:09:35.084 9225.452 - 9275.865: 98.7013% ( 3) 00:09:35.084 9275.865 - 9326.277: 98.7216% ( 4) 00:09:35.084 9326.277 - 9376.689: 98.7571% ( 7) 00:09:35.084 9376.689 - 9427.102: 98.7825% ( 5) 00:09:35.084 9427.102 - 9477.514: 98.8129% ( 6) 00:09:35.084 9477.514 - 9527.926: 98.8433% ( 6) 00:09:35.084 9527.926 - 9578.338: 98.8687% ( 5) 00:09:35.084 9578.338 - 9628.751: 98.8991% ( 6) 00:09:35.084 9628.751 - 9679.163: 98.9194% ( 4) 00:09:35.084 9679.163 - 9729.575: 98.9499% ( 6) 00:09:35.084 9729.575 - 9779.988: 99.1274% ( 35) 00:09:35.084 9779.988 - 9830.400: 99.1629% ( 7) 00:09:35.084 9830.400 - 9880.812: 99.1782% ( 3) 00:09:35.084 9880.812 - 9931.225: 99.1934% ( 3) 00:09:35.084 9931.225 - 9981.637: 99.2086% ( 3) 00:09:35.084 9981.637 - 10032.049: 99.2188% ( 2) 00:09:35.084 10032.049 - 10082.462: 99.2340% ( 3) 00:09:35.084 10082.462 - 10132.874: 99.2492% ( 3) 00:09:35.084 10132.874 - 10183.286: 99.2644% ( 3) 00:09:35.084 10183.286 - 10233.698: 99.2746% ( 2) 00:09:35.084 10233.698 - 10284.111: 99.2898% ( 3) 00:09:35.084 10284.111 - 10334.523: 99.3050% ( 3) 00:09:35.084 10334.523 - 
10384.935: 99.3202% ( 3) 00:09:35.084 10384.935 - 10435.348: 99.3354% ( 3) 00:09:35.084 10435.348 - 10485.760: 99.3506% ( 3) 00:09:35.084 22181.415 - 22282.240: 99.3811% ( 6) 00:09:35.084 22282.240 - 22383.065: 99.4267% ( 9) 00:09:35.084 22383.065 - 22483.889: 99.4775% ( 10) 00:09:35.084 22483.889 - 22584.714: 99.5079% ( 6) 00:09:35.084 22584.714 - 22685.538: 99.5485% ( 8) 00:09:35.084 22685.538 - 22786.363: 99.5840% ( 7) 00:09:35.084 22786.363 - 22887.188: 99.6246% ( 8) 00:09:35.084 22887.188 - 22988.012: 99.6855% ( 12) 00:09:35.084 22988.012 - 23088.837: 99.7210% ( 7) 00:09:35.084 23088.837 - 23189.662: 99.7413% ( 4) 00:09:35.084 23189.662 - 23290.486: 99.7514% ( 2) 00:09:35.084 23290.486 - 23391.311: 99.7666% ( 3) 00:09:35.084 23391.311 - 23492.135: 99.7819% ( 3) 00:09:35.084 23492.135 - 23592.960: 99.7971% ( 3) 00:09:35.084 23592.960 - 23693.785: 99.8174% ( 4) 00:09:35.084 23693.785 - 23794.609: 99.8326% ( 3) 00:09:35.084 23794.609 - 23895.434: 99.8478% ( 3) 00:09:35.084 23895.434 - 23996.258: 99.8630% ( 3) 00:09:35.084 23996.258 - 24097.083: 99.8833% ( 4) 00:09:35.084 24097.083 - 24197.908: 99.8935% ( 2) 00:09:35.084 24197.908 - 24298.732: 99.9138% ( 4) 00:09:35.084 24298.732 - 24399.557: 99.9290% ( 3) 00:09:35.084 24399.557 - 24500.382: 99.9493% ( 4) 00:09:35.084 24500.382 - 24601.206: 99.9645% ( 3) 00:09:35.084 24601.206 - 24702.031: 99.9797% ( 3) 00:09:35.084 24702.031 - 24802.855: 99.9949% ( 3) 00:09:35.084 24802.855 - 24903.680: 100.0000% ( 1) 00:09:35.084 00:09:35.084 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:35.084 ============================================================================== 00:09:35.084 Range in us Cumulative IO count 00:09:35.084 5167.262 - 5192.468: 0.0051% ( 1) 00:09:35.084 5217.674 - 5242.880: 0.0304% ( 5) 00:09:35.084 5242.880 - 5268.086: 0.0355% ( 1) 00:09:35.084 5293.292 - 5318.498: 0.0609% ( 5) 00:09:35.084 5318.498 - 5343.705: 0.0964% ( 7) 00:09:35.084 5343.705 - 5368.911: 0.1420% ( 9) 00:09:35.084 5368.911 - 5394.117: 0.2435% ( 20) 00:09:35.084 5394.117 - 5419.323: 0.3399% ( 19) 00:09:35.084 5419.323 - 5444.529: 0.4921% ( 30) 00:09:35.084 5444.529 - 5469.735: 0.6646% ( 34) 00:09:35.084 5469.735 - 5494.942: 0.8624% ( 39) 00:09:35.084 5494.942 - 5520.148: 1.0806% ( 43) 00:09:35.084 5520.148 - 5545.354: 1.3444% ( 52) 00:09:35.084 5545.354 - 5570.560: 1.6487% ( 60) 00:09:35.084 5570.560 - 5595.766: 1.9836% ( 66) 00:09:35.084 5595.766 - 5620.972: 2.3894% ( 80) 00:09:35.084 5620.972 - 5646.178: 2.8612% ( 93) 00:09:35.084 5646.178 - 5671.385: 3.5055% ( 127) 00:09:35.084 5671.385 - 5696.591: 4.2157% ( 140) 00:09:35.084 5696.591 - 5721.797: 5.1491% ( 184) 00:09:35.084 5721.797 - 5747.003: 6.1891% ( 205) 00:09:35.084 5747.003 - 5772.209: 7.4422% ( 247) 00:09:35.084 5772.209 - 5797.415: 8.8271% ( 273) 00:09:35.084 5797.415 - 5822.622: 10.3541% ( 301) 00:09:35.084 5822.622 - 5847.828: 12.1347% ( 351) 00:09:35.084 5847.828 - 5873.034: 14.1589% ( 399) 00:09:35.084 5873.034 - 5898.240: 16.2794% ( 418) 00:09:35.084 5898.240 - 5923.446: 18.1209% ( 363) 00:09:35.084 5923.446 - 5948.652: 19.9980% ( 370) 00:09:35.084 5948.652 - 5973.858: 22.0373% ( 402) 00:09:35.084 5973.858 - 5999.065: 24.2492% ( 436) 00:09:35.084 5999.065 - 6024.271: 26.5168% ( 447) 00:09:35.084 6024.271 - 6049.477: 28.7794% ( 446) 00:09:35.084 6049.477 - 6074.683: 31.1942% ( 476) 00:09:35.084 6074.683 - 6099.889: 33.2589% ( 407) 00:09:35.084 6099.889 - 6125.095: 35.7549% ( 492) 00:09:35.084 6125.095 - 6150.302: 38.3320% ( 508) 00:09:35.084 6150.302 - 6175.508: 40.7670% ( 
480) 00:09:35.084 6175.508 - 6200.714: 43.5826% ( 555) 00:09:35.084 6200.714 - 6225.920: 46.4235% ( 560) 00:09:35.084 6225.920 - 6251.126: 49.3912% ( 585) 00:09:35.084 6251.126 - 6276.332: 52.0343% ( 521) 00:09:35.084 6276.332 - 6301.538: 54.7382% ( 533) 00:09:35.084 6301.538 - 6326.745: 56.8283% ( 412) 00:09:35.084 6326.745 - 6351.951: 59.2685% ( 481) 00:09:35.084 6351.951 - 6377.157: 61.1201% ( 365) 00:09:35.084 6377.157 - 6402.363: 62.9464% ( 360) 00:09:35.084 6402.363 - 6427.569: 64.6408% ( 334) 00:09:35.084 6427.569 - 6452.775: 66.3758% ( 342) 00:09:35.084 6452.775 - 6503.188: 69.1254% ( 542) 00:09:35.084 6503.188 - 6553.600: 71.6213% ( 492) 00:09:35.084 6553.600 - 6604.012: 74.2644% ( 521) 00:09:35.084 6604.012 - 6654.425: 76.8314% ( 506) 00:09:35.084 6654.425 - 6704.837: 79.0787% ( 443) 00:09:35.084 6704.837 - 6755.249: 81.2703% ( 432) 00:09:35.084 6755.249 - 6805.662: 83.2640% ( 393) 00:09:35.084 6805.662 - 6856.074: 85.2222% ( 386) 00:09:35.084 6856.074 - 6906.486: 86.9166% ( 334) 00:09:35.084 6906.486 - 6956.898: 88.6313% ( 338) 00:09:35.084 6956.898 - 7007.311: 89.9351% ( 257) 00:09:35.084 7007.311 - 7057.723: 90.9598% ( 202) 00:09:35.084 7057.723 - 7108.135: 91.8172% ( 169) 00:09:35.084 7108.135 - 7158.548: 92.5172% ( 138) 00:09:35.084 7158.548 - 7208.960: 93.0651% ( 108) 00:09:35.084 7208.960 - 7259.372: 93.4761% ( 81) 00:09:35.084 7259.372 - 7309.785: 93.8109% ( 66) 00:09:35.084 7309.785 - 7360.197: 94.1051% ( 58) 00:09:35.084 7360.197 - 7410.609: 94.3791% ( 54) 00:09:35.084 7410.609 - 7461.022: 94.6378% ( 51) 00:09:35.084 7461.022 - 7511.434: 94.9371% ( 59) 00:09:35.084 7511.434 - 7561.846: 95.3429% ( 80) 00:09:35.084 7561.846 - 7612.258: 95.6828% ( 67) 00:09:35.084 7612.258 - 7662.671: 96.0126% ( 65) 00:09:35.084 7662.671 - 7713.083: 96.3778% ( 72) 00:09:35.084 7713.083 - 7763.495: 96.5706% ( 38) 00:09:35.084 7763.495 - 7813.908: 96.7228% ( 30) 00:09:35.084 7813.908 - 7864.320: 96.8851% ( 32) 00:09:35.084 7864.320 - 7914.732: 97.0526% ( 33) 00:09:35.084 7914.732 - 7965.145: 97.1794% ( 25) 00:09:35.085 7965.145 - 8015.557: 97.3113% ( 26) 00:09:35.085 8015.557 - 8065.969: 97.5142% ( 40) 00:09:35.085 8065.969 - 8116.382: 97.6360% ( 24) 00:09:35.085 8116.382 - 8166.794: 97.7273% ( 18) 00:09:35.085 8166.794 - 8217.206: 97.8338% ( 21) 00:09:35.085 8217.206 - 8267.618: 98.0063% ( 34) 00:09:35.085 8267.618 - 8318.031: 98.1078% ( 20) 00:09:35.085 8318.031 - 8368.443: 98.1889% ( 16) 00:09:35.085 8368.443 - 8418.855: 98.2498% ( 12) 00:09:35.085 8418.855 - 8469.268: 98.3005% ( 10) 00:09:35.085 8469.268 - 8519.680: 98.3563% ( 11) 00:09:35.085 8519.680 - 8570.092: 98.4223% ( 13) 00:09:35.085 8570.092 - 8620.505: 98.4578% ( 7) 00:09:35.085 8620.505 - 8670.917: 98.5034% ( 9) 00:09:35.085 8670.917 - 8721.329: 98.5440% ( 8) 00:09:35.085 8721.329 - 8771.742: 98.5897% ( 9) 00:09:35.085 8771.742 - 8822.154: 98.6303% ( 8) 00:09:35.085 8822.154 - 8872.566: 98.6455% ( 3) 00:09:35.085 8872.566 - 8922.978: 98.6658% ( 4) 00:09:35.085 8922.978 - 8973.391: 98.6810% ( 3) 00:09:35.085 8973.391 - 9023.803: 98.7013% ( 4) 00:09:35.085 9225.452 - 9275.865: 98.7267% ( 5) 00:09:35.085 9275.865 - 9326.277: 98.7520% ( 5) 00:09:35.085 9326.277 - 9376.689: 98.7774% ( 5) 00:09:35.085 9376.689 - 9427.102: 98.8129% ( 7) 00:09:35.085 9427.102 - 9477.514: 98.8433% ( 6) 00:09:35.085 9477.514 - 9527.926: 98.8687% ( 5) 00:09:35.085 9527.926 - 9578.338: 98.8991% ( 6) 00:09:35.085 9578.338 - 9628.751: 98.9245% ( 5) 00:09:35.085 9628.751 - 9679.163: 98.9448% ( 4) 00:09:35.085 9679.163 - 9729.575: 98.9702% ( 5) 00:09:35.085 
9729.575 - 9779.988: 99.0767% ( 21) 00:09:35.085 9779.988 - 9830.400: 99.1173% ( 8) 00:09:35.085 9830.400 - 9880.812: 99.1376% ( 4) 00:09:35.085 9880.812 - 9931.225: 99.1528% ( 3) 00:09:35.085 9931.225 - 9981.637: 99.1629% ( 2) 00:09:35.085 9981.637 - 10032.049: 99.1782% ( 3) 00:09:35.085 10032.049 - 10082.462: 99.1934% ( 3) 00:09:35.085 10082.462 - 10132.874: 99.2086% ( 3) 00:09:35.085 10132.874 - 10183.286: 99.2238% ( 3) 00:09:35.085 10183.286 - 10233.698: 99.2390% ( 3) 00:09:35.085 10233.698 - 10284.111: 99.2543% ( 3) 00:09:35.085 10284.111 - 10334.523: 99.2644% ( 2) 00:09:35.085 10334.523 - 10384.935: 99.2796% ( 3) 00:09:35.085 10384.935 - 10435.348: 99.2948% ( 3) 00:09:35.085 10435.348 - 10485.760: 99.3101% ( 3) 00:09:35.085 10485.760 - 10536.172: 99.3304% ( 4) 00:09:35.085 10536.172 - 10586.585: 99.3456% ( 3) 00:09:35.085 10586.585 - 10636.997: 99.3506% ( 1) 00:09:35.085 20769.871 - 20870.695: 99.3557% ( 1) 00:09:35.085 20971.520 - 21072.345: 99.3608% ( 1) 00:09:35.085 21576.468 - 21677.292: 99.3659% ( 1) 00:09:35.085 21677.292 - 21778.117: 99.3709% ( 1) 00:09:35.085 21878.942 - 21979.766: 99.4115% ( 8) 00:09:35.085 21979.766 - 22080.591: 99.4927% ( 16) 00:09:35.085 22080.591 - 22181.415: 99.5789% ( 17) 00:09:35.085 22181.415 - 22282.240: 99.6753% ( 19) 00:09:35.085 22282.240 - 22383.065: 99.7869% ( 22) 00:09:35.085 22383.065 - 22483.889: 99.8377% ( 10) 00:09:35.085 22483.889 - 22584.714: 99.8529% ( 3) 00:09:35.085 22584.714 - 22685.538: 99.8681% ( 3) 00:09:35.085 22685.538 - 22786.363: 99.8833% ( 3) 00:09:35.085 22786.363 - 22887.188: 99.8985% ( 3) 00:09:35.085 22887.188 - 22988.012: 99.9188% ( 4) 00:09:35.085 22988.012 - 23088.837: 99.9341% ( 3) 00:09:35.085 23088.837 - 23189.662: 99.9493% ( 3) 00:09:35.085 23189.662 - 23290.486: 99.9696% ( 4) 00:09:35.085 23290.486 - 23391.311: 99.9848% ( 3) 00:09:35.085 23391.311 - 23492.135: 100.0000% ( 3) 00:09:35.085 00:09:35.085 09:47:23 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:35.085 00:09:35.085 real 0m2.598s 00:09:35.085 user 0m2.303s 00:09:35.085 sys 0m0.197s 00:09:35.085 09:47:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.085 ************************************ 00:09:35.085 END TEST nvme_perf 00:09:35.085 ************************************ 00:09:35.085 09:47:23 -- common/autotest_common.sh@10 -- # set +x 00:09:35.085 09:47:23 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:35.085 09:47:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:35.085 09:47:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.085 09:47:23 -- common/autotest_common.sh@10 -- # set +x 00:09:35.085 ************************************ 00:09:35.085 START TEST nvme_hello_world 00:09:35.085 ************************************ 00:09:35.085 09:47:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:35.343 Initializing NVMe Controllers 00:09:35.343 Attached to 0000:00:09.0 00:09:35.343 Namespace ID: 1 size: 1GB 00:09:35.343 Attached to 0000:00:06.0 00:09:35.343 Namespace ID: 1 size: 6GB 00:09:35.343 Attached to 0000:00:07.0 00:09:35.343 Namespace ID: 1 size: 5GB 00:09:35.343 Attached to 0000:00:08.0 00:09:35.343 Namespace ID: 1 size: 4GB 00:09:35.343 Namespace ID: 2 size: 4GB 00:09:35.343 Namespace ID: 3 size: 4GB 00:09:35.343 Initialization complete. 00:09:35.343 INFO: using host memory buffer for IO 00:09:35.343 Hello world! 
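hello_world prints one "using host memory buffer for IO" / "Hello world!" pair per namespace, six in total for the namespace list attached above (one each on 09.0, 06.0 and 07.0, plus three on 08.0). The example can be rerun standalone with the same command run_test wrapped here, taken verbatim from this log; the -i 0 shared-memory id is kept as-is from the harness invocation:

  # One write and read-back per attached namespace; requires the controllers
  # to be bound to the userspace driver, as they are during this run.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0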
00:09:35.343 INFO: using host memory buffer for IO 00:09:35.343 Hello world! 00:09:35.343 INFO: using host memory buffer for IO 00:09:35.343 Hello world! 00:09:35.343 INFO: using host memory buffer for IO 00:09:35.343 Hello world! 00:09:35.343 INFO: using host memory buffer for IO 00:09:35.343 Hello world! 00:09:35.343 INFO: using host memory buffer for IO 00:09:35.343 Hello world! 00:09:35.343 00:09:35.343 real 0m0.247s 00:09:35.343 user 0m0.126s 00:09:35.343 sys 0m0.087s 00:09:35.343 09:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.343 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:35.343 ************************************ 00:09:35.343 END TEST nvme_hello_world 00:09:35.343 ************************************ 00:09:35.343 09:47:24 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:35.343 09:47:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:35.343 09:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.343 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:35.343 ************************************ 00:09:35.343 START TEST nvme_sgl 00:09:35.343 ************************************ 00:09:35.343 09:47:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:35.343 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:35.343 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:35.343 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:35.343 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:35.344 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:35.602 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:35.602 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:09:35.602 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:35.602 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:35.602 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:35.602 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:35.602 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:35.602 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:35.602 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_5 Invalid IO length parameter 
00:09:35.602 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:35.602 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:35.602 NVMe Readv/Writev Request test 00:09:35.602 Attached to 0000:00:09.0 00:09:35.602 Attached to 0000:00:06.0 00:09:35.602 Attached to 0000:00:07.0 00:09:35.602 Attached to 0000:00:08.0 00:09:35.602 0000:00:06.0: build_io_request_2 test passed 00:09:35.602 0000:00:06.0: build_io_request_4 test passed 00:09:35.602 0000:00:06.0: build_io_request_5 test passed 00:09:35.602 0000:00:06.0: build_io_request_6 test passed 00:09:35.602 0000:00:06.0: build_io_request_7 test passed 00:09:35.602 0000:00:06.0: build_io_request_10 test passed 00:09:35.602 0000:00:07.0: build_io_request_2 test passed 00:09:35.602 0000:00:07.0: build_io_request_4 test passed 00:09:35.602 0000:00:07.0: build_io_request_5 test passed 00:09:35.602 0000:00:07.0: build_io_request_6 test passed 00:09:35.602 0000:00:07.0: build_io_request_7 test passed 00:09:35.602 0000:00:07.0: build_io_request_10 test passed 00:09:35.602 Cleaning up... 00:09:35.602 00:09:35.602 real 0m0.369s 00:09:35.602 user 0m0.225s 00:09:35.602 sys 0m0.099s 00:09:35.602 09:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.602 ************************************ 00:09:35.602 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:35.602 END TEST nvme_sgl 00:09:35.602 ************************************ 00:09:35.602 09:47:24 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:35.602 09:47:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:35.602 09:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.602 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:35.602 ************************************ 00:09:35.602 START TEST nvme_e2edp 00:09:35.602 ************************************ 00:09:35.602 09:47:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:35.860 NVMe Write/Read with End-to-End data protection test 00:09:35.860 Attached to 0000:00:09.0 00:09:35.860 Attached to 0000:00:06.0 00:09:35.860 Attached to 0000:00:07.0 00:09:35.860 Attached to 0000:00:08.0 00:09:35.860 Cleaning up... 
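Every test in this log is bracketed by the same asterisk banners and a real/user/sys timing triple; that framing comes from the run_test helper in autotest_common.sh. A minimal sketch of what the wrapper does, assuming plain bash (the real helper also manages xtrace state and exit-code bookkeeping):

  run_test() {
      # Simplified reconstruction: banner, timed execution, banner.
      local test_name=$1; shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      return $rc
  }

  # Invoked the same way nvme.sh does above:
  run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl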
00:09:35.860 00:09:35.860 real 0m0.182s 00:09:35.860 user 0m0.055s 00:09:35.860 sys 0m0.093s 00:09:35.860 09:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.860 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:35.860 ************************************ 00:09:35.860 END TEST nvme_e2edp 00:09:35.860 ************************************ 00:09:35.860 09:47:24 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:35.860 09:47:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:35.860 09:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.860 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:35.860 ************************************ 00:09:35.860 START TEST nvme_reserve 00:09:35.860 ************************************ 00:09:35.860 09:47:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:36.118 ===================================================== 00:09:36.118 NVMe Controller at PCI bus 0, device 9, function 0 00:09:36.118 ===================================================== 00:09:36.118 Reservations: Not Supported 00:09:36.118 ===================================================== 00:09:36.118 NVMe Controller at PCI bus 0, device 6, function 0 00:09:36.118 ===================================================== 00:09:36.118 Reservations: Not Supported 00:09:36.118 ===================================================== 00:09:36.118 NVMe Controller at PCI bus 0, device 7, function 0 00:09:36.118 ===================================================== 00:09:36.118 Reservations: Not Supported 00:09:36.118 ===================================================== 00:09:36.118 NVMe Controller at PCI bus 0, device 8, function 0 00:09:36.118 ===================================================== 00:09:36.118 Reservations: Not Supported 00:09:36.118 Reservation test passed 00:09:36.118 00:09:36.118 real 0m0.189s 00:09:36.118 user 0m0.056s 00:09:36.118 sys 0m0.090s 00:09:36.118 09:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.118 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 ************************************ 00:09:36.118 END TEST nvme_reserve 00:09:36.118 ************************************ 00:09:36.118 09:47:25 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:36.118 09:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.118 09:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.118 09:47:25 -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 ************************************ 00:09:36.118 START TEST nvme_err_injection 00:09:36.118 ************************************ 00:09:36.119 09:47:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:36.375 NVMe Error Injection test 00:09:36.375 Attached to 0000:00:09.0 00:09:36.375 Attached to 0000:00:06.0 00:09:36.375 Attached to 0000:00:07.0 00:09:36.375 Attached to 0000:00:08.0 00:09:36.375 0000:00:09.0: get features failed as expected 00:09:36.375 0000:00:06.0: get features failed as expected 00:09:36.375 0000:00:07.0: get features failed as expected 00:09:36.375 0000:00:08.0: get features failed as expected 00:09:36.375 0000:00:09.0: get features successfully as expected 00:09:36.375 0000:00:06.0: get features successfully as expected 00:09:36.375 0000:00:07.0: get features 
successfully as expected 00:09:36.375 0000:00:08.0: get features successfully as expected 00:09:36.375 0000:00:09.0: read failed as expected 00:09:36.375 0000:00:08.0: read failed as expected 00:09:36.375 0000:00:06.0: read failed as expected 00:09:36.375 0000:00:07.0: read failed as expected 00:09:36.375 0000:00:09.0: read successfully as expected 00:09:36.375 0000:00:06.0: read successfully as expected 00:09:36.375 0000:00:07.0: read successfully as expected 00:09:36.375 0000:00:08.0: read successfully as expected 00:09:36.375 Cleaning up... 00:09:36.375 00:09:36.375 real 0m0.250s 00:09:36.375 user 0m0.106s 00:09:36.375 sys 0m0.099s 00:09:36.375 09:47:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.375 09:47:25 -- common/autotest_common.sh@10 -- # set +x 00:09:36.375 ************************************ 00:09:36.375 END TEST nvme_err_injection 00:09:36.375 ************************************ 00:09:36.375 09:47:25 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:36.375 09:47:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:36.375 09:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.375 09:47:25 -- common/autotest_common.sh@10 -- # set +x 00:09:36.375 ************************************ 00:09:36.375 START TEST nvme_overhead 00:09:36.375 ************************************ 00:09:36.376 09:47:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:37.750 Initializing NVMe Controllers 00:09:37.750 Attached to 0000:00:09.0 00:09:37.750 Attached to 0000:00:06.0 00:09:37.750 Attached to 0000:00:07.0 00:09:37.750 Attached to 0000:00:08.0 00:09:37.750 Initialization complete. Launching workers. 
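The overhead benchmark was launched above as overhead -o 4096 -t 1 -H -i 0, and the submit/complete histograms that follow are its output. A sketch of a longer standalone run; the flag meanings are inferred from this invocation and its output rather than from the tool's help text (-o looks like the I/O size in bytes, -t the run time in seconds, -H the per-command histograms):

  # Hypothetical 10-second rerun at the same 4 KiB I/O size.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 10 -H -i 0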
00:09:37.750 submit (in ns) avg, min, max = 11201.6, 9985.4, 44305.4 00:09:37.750 complete (in ns) avg, min, max = 7561.7, 7135.4, 296279.2 00:09:37.750 00:09:37.750 Submit histogram 00:09:37.750 ================ 00:09:37.750 Range in us Cumulative Count 00:09:37.750 9.945 - 9.994: 0.0053% ( 1) 00:09:37.750 10.240 - 10.289: 0.0107% ( 1) 00:09:37.750 10.338 - 10.388: 0.0160% ( 1) 00:09:37.750 10.388 - 10.437: 0.0213% ( 1) 00:09:37.750 10.486 - 10.535: 0.0266% ( 1) 00:09:37.750 10.782 - 10.831: 0.0852% ( 11) 00:09:37.750 10.831 - 10.880: 0.7934% ( 133) 00:09:37.750 10.880 - 10.929: 4.8991% ( 771) 00:09:37.750 10.929 - 10.978: 18.0468% ( 2469) 00:09:37.750 10.978 - 11.028: 42.9895% ( 4684) 00:09:37.750 11.028 - 11.077: 68.1612% ( 4727) 00:09:37.750 11.077 - 11.126: 82.9384% ( 2775) 00:09:37.750 11.126 - 11.175: 89.7119% ( 1272) 00:09:37.750 11.175 - 11.225: 92.3372% ( 493) 00:09:37.750 11.225 - 11.274: 93.4342% ( 206) 00:09:37.750 11.274 - 11.323: 94.1956% ( 143) 00:09:37.750 11.323 - 11.372: 94.7654% ( 107) 00:09:37.750 11.372 - 11.422: 95.3831% ( 116) 00:09:37.750 11.422 - 11.471: 95.8997% ( 97) 00:09:37.750 11.471 - 11.520: 96.1500% ( 47) 00:09:37.750 11.520 - 11.569: 96.2831% ( 25) 00:09:37.750 11.569 - 11.618: 96.3523% ( 13) 00:09:37.750 11.618 - 11.668: 96.4109% ( 11) 00:09:37.750 11.668 - 11.717: 96.4375% ( 5) 00:09:37.750 11.717 - 11.766: 96.4588% ( 4) 00:09:37.750 11.766 - 11.815: 96.4695% ( 2) 00:09:37.750 11.815 - 11.865: 96.4854% ( 3) 00:09:37.750 11.865 - 11.914: 96.5014% ( 3) 00:09:37.750 11.914 - 11.963: 96.5121% ( 2) 00:09:37.750 11.963 - 12.012: 96.5653% ( 10) 00:09:37.750 12.012 - 12.062: 96.6132% ( 9) 00:09:37.750 12.062 - 12.111: 96.6878% ( 14) 00:09:37.750 12.111 - 12.160: 96.7890% ( 19) 00:09:37.750 12.160 - 12.209: 96.8582% ( 13) 00:09:37.750 12.209 - 12.258: 96.9114% ( 10) 00:09:37.750 12.258 - 12.308: 96.9594% ( 9) 00:09:37.750 12.308 - 12.357: 96.9860% ( 5) 00:09:37.750 12.357 - 12.406: 97.0126% ( 5) 00:09:37.750 12.406 - 12.455: 97.0339% ( 4) 00:09:37.750 12.455 - 12.505: 97.0552% ( 4) 00:09:37.750 12.505 - 12.554: 97.0765% ( 4) 00:09:37.750 12.603 - 12.702: 97.0925% ( 3) 00:09:37.750 12.702 - 12.800: 97.1085% ( 3) 00:09:37.750 12.800 - 12.898: 97.1724% ( 12) 00:09:37.750 12.898 - 12.997: 97.2895% ( 22) 00:09:37.750 12.997 - 13.095: 97.4599% ( 32) 00:09:37.750 13.095 - 13.194: 97.5771% ( 22) 00:09:37.750 13.194 - 13.292: 97.6303% ( 10) 00:09:37.750 13.292 - 13.391: 97.6783% ( 9) 00:09:37.750 13.391 - 13.489: 97.7741% ( 18) 00:09:37.750 13.489 - 13.588: 97.8327% ( 11) 00:09:37.750 13.588 - 13.686: 97.8540% ( 4) 00:09:37.750 13.686 - 13.785: 97.8859% ( 6) 00:09:37.750 13.785 - 13.883: 97.8966% ( 2) 00:09:37.750 13.883 - 13.982: 97.9072% ( 2) 00:09:37.750 13.982 - 14.080: 97.9232% ( 3) 00:09:37.750 14.080 - 14.178: 97.9339% ( 2) 00:09:37.750 14.178 - 14.277: 97.9392% ( 1) 00:09:37.750 14.277 - 14.375: 97.9605% ( 4) 00:09:37.750 14.375 - 14.474: 97.9978% ( 7) 00:09:37.750 14.474 - 14.572: 98.0191% ( 4) 00:09:37.750 14.572 - 14.671: 98.0350% ( 3) 00:09:37.750 14.671 - 14.769: 98.0723% ( 7) 00:09:37.750 14.769 - 14.868: 98.1149% ( 8) 00:09:37.750 14.868 - 14.966: 98.1682% ( 10) 00:09:37.750 14.966 - 15.065: 98.2054% ( 7) 00:09:37.750 15.065 - 15.163: 98.2534% ( 9) 00:09:37.750 15.163 - 15.262: 98.3066% ( 10) 00:09:37.750 15.262 - 15.360: 98.3758% ( 13) 00:09:37.750 15.360 - 15.458: 98.4131% ( 7) 00:09:37.750 15.458 - 15.557: 98.4664% ( 10) 00:09:37.750 15.557 - 15.655: 98.5143% ( 9) 00:09:37.750 15.655 - 15.754: 98.5356% ( 4) 00:09:37.750 15.754 - 15.852: 98.5622% ( 5) 
00:09:37.750 15.852 - 15.951: 98.5835% ( 4) 00:09:37.750 15.951 - 16.049: 98.6101% ( 5) 00:09:37.750 16.049 - 16.148: 98.6368% ( 5) 00:09:37.750 16.148 - 16.246: 98.6581% ( 4) 00:09:37.750 16.246 - 16.345: 98.7433% ( 16) 00:09:37.750 16.345 - 16.443: 98.7806% ( 7) 00:09:37.750 16.443 - 16.542: 98.8977% ( 22) 00:09:37.750 16.542 - 16.640: 99.0468% ( 28) 00:09:37.750 16.640 - 16.738: 99.1320% ( 16) 00:09:37.750 16.738 - 16.837: 99.2279% ( 18) 00:09:37.750 16.837 - 16.935: 99.2811% ( 10) 00:09:37.750 16.935 - 17.034: 99.3663% ( 16) 00:09:37.750 17.034 - 17.132: 99.4675% ( 19) 00:09:37.750 17.132 - 17.231: 99.5154% ( 9) 00:09:37.750 17.231 - 17.329: 99.5740% ( 11) 00:09:37.750 17.329 - 17.428: 99.6006% ( 5) 00:09:37.750 17.428 - 17.526: 99.6379% ( 7) 00:09:37.750 17.526 - 17.625: 99.6539% ( 3) 00:09:37.750 17.625 - 17.723: 99.6752% ( 4) 00:09:37.750 17.723 - 17.822: 99.6911% ( 3) 00:09:37.750 17.920 - 18.018: 99.7018% ( 2) 00:09:37.750 18.018 - 18.117: 99.7071% ( 1) 00:09:37.750 18.117 - 18.215: 99.7231% ( 3) 00:09:37.750 18.314 - 18.412: 99.7337% ( 2) 00:09:37.750 18.412 - 18.511: 99.7444% ( 2) 00:09:37.750 18.905 - 19.003: 99.7497% ( 1) 00:09:37.750 19.200 - 19.298: 99.7657% ( 3) 00:09:37.750 19.298 - 19.397: 99.7817% ( 3) 00:09:37.750 19.397 - 19.495: 99.7870% ( 1) 00:09:37.750 19.495 - 19.594: 99.7923% ( 1) 00:09:37.750 19.692 - 19.791: 99.7976% ( 1) 00:09:37.750 19.791 - 19.889: 99.8136% ( 3) 00:09:37.750 19.889 - 19.988: 99.8189% ( 1) 00:09:37.750 19.988 - 20.086: 99.8243% ( 1) 00:09:37.750 20.480 - 20.578: 99.8296% ( 1) 00:09:37.750 20.677 - 20.775: 99.8402% ( 2) 00:09:37.750 20.775 - 20.874: 99.8456% ( 1) 00:09:37.750 20.874 - 20.972: 99.8509% ( 1) 00:09:37.750 21.071 - 21.169: 99.8562% ( 1) 00:09:37.750 21.268 - 21.366: 99.8775% ( 4) 00:09:37.750 21.563 - 21.662: 99.8828% ( 1) 00:09:37.750 22.055 - 22.154: 99.8882% ( 1) 00:09:37.750 22.252 - 22.351: 99.8935% ( 1) 00:09:37.750 22.548 - 22.646: 99.8988% ( 1) 00:09:37.750 22.646 - 22.745: 99.9095% ( 2) 00:09:37.750 23.138 - 23.237: 99.9148% ( 1) 00:09:37.750 24.714 - 24.812: 99.9201% ( 1) 00:09:37.750 24.812 - 24.911: 99.9308% ( 2) 00:09:37.750 25.108 - 25.206: 99.9361% ( 1) 00:09:37.750 25.797 - 25.994: 99.9467% ( 2) 00:09:37.751 26.191 - 26.388: 99.9521% ( 1) 00:09:37.751 28.751 - 28.948: 99.9627% ( 2) 00:09:37.751 28.948 - 29.145: 99.9680% ( 1) 00:09:37.751 30.917 - 31.114: 99.9734% ( 1) 00:09:37.751 31.508 - 31.705: 99.9787% ( 1) 00:09:37.751 37.612 - 37.809: 99.9840% ( 1) 00:09:37.751 38.203 - 38.400: 99.9893% ( 1) 00:09:37.751 41.354 - 41.551: 99.9947% ( 1) 00:09:37.751 44.111 - 44.308: 100.0000% ( 1) 00:09:37.751 00:09:37.751 Complete histogram 00:09:37.751 ================== 00:09:37.751 Range in us Cumulative Count 00:09:37.751 7.089 - 7.138: 0.0053% ( 1) 00:09:37.751 7.138 - 7.188: 0.0320% ( 5) 00:09:37.751 7.188 - 7.237: 1.0224% ( 186) 00:09:37.751 7.237 - 7.286: 8.6480% ( 1432) 00:09:37.751 7.286 - 7.335: 30.3531% ( 4076) 00:09:37.751 7.335 - 7.385: 57.8838% ( 5170) 00:09:37.751 7.385 - 7.434: 77.7943% ( 3739) 00:09:37.751 7.434 - 7.483: 89.0250% ( 2109) 00:09:37.751 7.483 - 7.532: 93.9294% ( 921) 00:09:37.751 7.532 - 7.582: 95.9103% ( 372) 00:09:37.751 7.582 - 7.631: 96.5973% ( 129) 00:09:37.751 7.631 - 7.680: 97.0392% ( 83) 00:09:37.751 7.680 - 7.729: 97.1724% ( 25) 00:09:37.751 7.729 - 7.778: 97.2469% ( 14) 00:09:37.751 7.778 - 7.828: 97.2895% ( 8) 00:09:37.751 7.828 - 7.877: 97.3215% ( 6) 00:09:37.751 7.877 - 7.926: 97.3588% ( 7) 00:09:37.751 7.926 - 7.975: 97.3907% ( 6) 00:09:37.751 7.975 - 8.025: 97.4227% ( 6) 
00:09:37.751 8.025 - 8.074: 97.4546% ( 6) 00:09:37.751 8.074 - 8.123: 97.4706% ( 3) 00:09:37.751 8.123 - 8.172: 97.4972% ( 5) 00:09:37.751 8.172 - 8.222: 97.5132% ( 3) 00:09:37.751 8.222 - 8.271: 97.5292% ( 3) 00:09:37.751 8.271 - 8.320: 97.5451% ( 3) 00:09:37.751 8.320 - 8.369: 97.5558% ( 2) 00:09:37.751 8.369 - 8.418: 97.5611% ( 1) 00:09:37.751 8.418 - 8.468: 97.5718% ( 2) 00:09:37.751 8.517 - 8.566: 97.5824% ( 2) 00:09:37.751 8.566 - 8.615: 97.5984% ( 3) 00:09:37.751 8.615 - 8.665: 97.6037% ( 1) 00:09:37.751 8.665 - 8.714: 97.6090% ( 1) 00:09:37.751 8.714 - 8.763: 97.6144% ( 1) 00:09:37.751 8.763 - 8.812: 97.6197% ( 1) 00:09:37.751 8.812 - 8.862: 97.6250% ( 1) 00:09:37.751 8.862 - 8.911: 97.6357% ( 2) 00:09:37.751 9.452 - 9.502: 97.6410% ( 1) 00:09:37.751 9.600 - 9.649: 97.6463% ( 1) 00:09:37.751 9.846 - 9.895: 97.6516% ( 1) 00:09:37.751 9.895 - 9.945: 97.6623% ( 2) 00:09:37.751 9.945 - 9.994: 97.6676% ( 1) 00:09:37.751 9.994 - 10.043: 97.6729% ( 1) 00:09:37.751 10.092 - 10.142: 97.6836% ( 2) 00:09:37.751 10.142 - 10.191: 97.6889% ( 1) 00:09:37.751 10.191 - 10.240: 97.6996% ( 2) 00:09:37.751 10.240 - 10.289: 97.7049% ( 1) 00:09:37.751 10.289 - 10.338: 97.7262% ( 4) 00:09:37.751 10.338 - 10.388: 97.7368% ( 2) 00:09:37.751 10.388 - 10.437: 97.7528% ( 3) 00:09:37.751 10.437 - 10.486: 97.7581% ( 1) 00:09:37.751 10.486 - 10.535: 97.7688% ( 2) 00:09:37.751 10.535 - 10.585: 97.7741% ( 1) 00:09:37.751 10.585 - 10.634: 97.8061% ( 6) 00:09:37.751 10.634 - 10.683: 97.8274% ( 4) 00:09:37.751 10.683 - 10.732: 97.8327% ( 1) 00:09:37.751 10.732 - 10.782: 97.8433% ( 2) 00:09:37.751 10.782 - 10.831: 97.8593% ( 3) 00:09:37.751 10.831 - 10.880: 97.8753% ( 3) 00:09:37.751 10.880 - 10.929: 97.8913% ( 3) 00:09:37.751 10.929 - 10.978: 97.9285% ( 7) 00:09:37.751 10.978 - 11.028: 97.9605% ( 6) 00:09:37.751 11.028 - 11.077: 97.9818% ( 4) 00:09:37.751 11.126 - 11.175: 98.0031% ( 4) 00:09:37.751 11.175 - 11.225: 98.0191% ( 3) 00:09:37.751 11.225 - 11.274: 98.0563% ( 7) 00:09:37.751 11.274 - 11.323: 98.0830% ( 5) 00:09:37.751 11.323 - 11.372: 98.0936% ( 2) 00:09:37.751 11.372 - 11.422: 98.1149% ( 4) 00:09:37.751 11.520 - 11.569: 98.1202% ( 1) 00:09:37.751 11.569 - 11.618: 98.1256% ( 1) 00:09:37.751 11.618 - 11.668: 98.1362% ( 2) 00:09:37.751 11.668 - 11.717: 98.1469% ( 2) 00:09:37.751 11.717 - 11.766: 98.1628% ( 3) 00:09:37.751 11.766 - 11.815: 98.1735% ( 2) 00:09:37.751 11.815 - 11.865: 98.1841% ( 2) 00:09:37.751 11.865 - 11.914: 98.2001% ( 3) 00:09:37.751 11.963 - 12.012: 98.2108% ( 2) 00:09:37.751 12.012 - 12.062: 98.2214% ( 2) 00:09:37.751 12.160 - 12.209: 98.2321% ( 2) 00:09:37.751 12.357 - 12.406: 98.2374% ( 1) 00:09:37.751 12.406 - 12.455: 98.2534% ( 3) 00:09:37.751 12.505 - 12.554: 98.2693% ( 3) 00:09:37.751 12.554 - 12.603: 98.2747% ( 1) 00:09:37.751 12.603 - 12.702: 98.2960% ( 4) 00:09:37.751 12.702 - 12.800: 98.3758% ( 15) 00:09:37.751 12.800 - 12.898: 98.4770% ( 19) 00:09:37.751 12.898 - 12.997: 98.5782% ( 19) 00:09:37.751 12.997 - 13.095: 98.6900% ( 21) 00:09:37.751 13.095 - 13.194: 98.7646% ( 14) 00:09:37.751 13.194 - 13.292: 98.8711% ( 20) 00:09:37.751 13.292 - 13.391: 98.9669% ( 18) 00:09:37.751 13.391 - 13.489: 99.0628% ( 18) 00:09:37.751 13.489 - 13.588: 99.1640% ( 19) 00:09:37.751 13.588 - 13.686: 99.2651% ( 19) 00:09:37.751 13.686 - 13.785: 99.3237% ( 11) 00:09:37.751 13.785 - 13.883: 99.3929% ( 13) 00:09:37.751 13.883 - 13.982: 99.4142% ( 4) 00:09:37.751 13.982 - 14.080: 99.4409% ( 5) 00:09:37.751 14.080 - 14.178: 99.4941% ( 10) 00:09:37.751 14.178 - 14.277: 99.5420% ( 9) 00:09:37.751 14.277 - 
14.375: 99.5633% ( 4) 00:09:37.751 14.375 - 14.474: 99.5846% ( 4) 00:09:37.751 14.474 - 14.572: 99.5900% ( 1) 00:09:37.751 14.572 - 14.671: 99.6059% ( 3) 00:09:37.751 14.671 - 14.769: 99.6113% ( 1) 00:09:37.751 14.868 - 14.966: 99.6166% ( 1) 00:09:37.751 15.065 - 15.163: 99.6272% ( 2) 00:09:37.751 15.163 - 15.262: 99.6432% ( 3) 00:09:37.751 15.262 - 15.360: 99.6485% ( 1) 00:09:37.751 15.458 - 15.557: 99.6645% ( 3) 00:09:37.751 15.557 - 15.655: 99.6698% ( 1) 00:09:37.751 15.655 - 15.754: 99.6752% ( 1) 00:09:37.751 15.754 - 15.852: 99.6805% ( 1) 00:09:37.751 15.852 - 15.951: 99.6965% ( 3) 00:09:37.751 15.951 - 16.049: 99.7018% ( 1) 00:09:37.751 16.049 - 16.148: 99.7284% ( 5) 00:09:37.751 16.148 - 16.246: 99.7391% ( 2) 00:09:37.751 16.443 - 16.542: 99.7444% ( 1) 00:09:37.751 16.542 - 16.640: 99.7657% ( 4) 00:09:37.751 16.738 - 16.837: 99.7710% ( 1) 00:09:37.751 16.837 - 16.935: 99.7763% ( 1) 00:09:37.751 16.935 - 17.034: 99.7923% ( 3) 00:09:37.751 17.034 - 17.132: 99.7976% ( 1) 00:09:37.751 17.132 - 17.231: 99.8136% ( 3) 00:09:37.751 17.231 - 17.329: 99.8189% ( 1) 00:09:37.751 17.526 - 17.625: 99.8349% ( 3) 00:09:37.751 17.625 - 17.723: 99.8402% ( 1) 00:09:37.751 17.723 - 17.822: 99.8562% ( 3) 00:09:37.751 17.822 - 17.920: 99.8669% ( 2) 00:09:37.751 18.215 - 18.314: 99.8775% ( 2) 00:09:37.751 18.609 - 18.708: 99.8882% ( 2) 00:09:37.751 19.003 - 19.102: 99.8935% ( 1) 00:09:37.751 19.102 - 19.200: 99.8988% ( 1) 00:09:37.751 19.495 - 19.594: 99.9041% ( 1) 00:09:37.751 20.382 - 20.480: 99.9095% ( 1) 00:09:37.751 20.578 - 20.677: 99.9148% ( 1) 00:09:37.751 20.677 - 20.775: 99.9201% ( 1) 00:09:37.751 20.775 - 20.874: 99.9254% ( 1) 00:09:37.751 21.268 - 21.366: 99.9308% ( 1) 00:09:37.751 21.760 - 21.858: 99.9414% ( 2) 00:09:37.751 21.957 - 22.055: 99.9467% ( 1) 00:09:37.751 24.911 - 25.009: 99.9521% ( 1) 00:09:37.751 26.585 - 26.782: 99.9574% ( 1) 00:09:37.751 33.280 - 33.477: 99.9627% ( 1) 00:09:37.751 34.658 - 34.855: 99.9680% ( 1) 00:09:37.751 37.415 - 37.612: 99.9734% ( 1) 00:09:37.751 38.400 - 38.597: 99.9787% ( 1) 00:09:37.751 42.142 - 42.338: 99.9840% ( 1) 00:09:37.751 143.360 - 144.148: 99.9893% ( 1) 00:09:37.751 237.883 - 239.458: 99.9947% ( 1) 00:09:37.751 296.172 - 297.748: 100.0000% ( 1) 00:09:37.751 00:09:37.751 00:09:37.751 real 0m1.218s 00:09:37.751 user 0m1.075s 00:09:37.751 sys 0m0.093s 00:09:37.751 09:47:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.751 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:09:37.751 ************************************ 00:09:37.751 END TEST nvme_overhead 00:09:37.751 ************************************ 00:09:37.751 09:47:26 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:37.751 09:47:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:37.751 09:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.751 09:47:26 -- common/autotest_common.sh@10 -- # set +x 00:09:37.751 ************************************ 00:09:37.751 START TEST nvme_arbitration 00:09:37.751 ************************************ 00:09:37.751 09:47:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:41.049 Initializing NVMe Controllers 00:09:41.049 Attached to 0000:00:09.0 00:09:41.049 Attached to 0000:00:06.0 00:09:41.049 Attached to 0000:00:07.0 00:09:41.049 Attached to 0000:00:08.0 00:09:41.049 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:41.049 Associating QEMU NVMe Ctrl (12340 ) with lcore 
1 00:09:41.049 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:41.049 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:41.049 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:41.049 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:41.049 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:41.049 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:41.049 Initialization complete. Launching workers. 00:09:41.049 Starting thread on core 1 with urgent priority queue 00:09:41.049 Starting thread on core 2 with urgent priority queue 00:09:41.049 Starting thread on core 3 with urgent priority queue 00:09:41.049 Starting thread on core 0 with urgent priority queue 00:09:41.049 QEMU NVMe Ctrl (12343 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:09:41.049 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:09:41.049 QEMU NVMe Ctrl (12340 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:41.049 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:41.049 QEMU NVMe Ctrl (12341 ) core 2: 874.67 IO/s 114.33 secs/100000 ios 00:09:41.049 QEMU NVMe Ctrl (12342 ) core 3: 1002.67 IO/s 99.73 secs/100000 ios 00:09:41.049 ======================================================== 00:09:41.049 00:09:41.049 00:09:41.049 real 0m3.373s 00:09:41.049 user 0m9.386s 00:09:41.049 sys 0m0.112s 00:09:41.049 09:47:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.049 ************************************ 00:09:41.049 END TEST nvme_arbitration 00:09:41.049 ************************************ 00:09:41.049 09:47:29 -- common/autotest_common.sh@10 -- # set +x 00:09:41.049 09:47:29 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:41.049 09:47:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:41.049 09:47:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.049 09:47:29 -- common/autotest_common.sh@10 -- # set +x 00:09:41.049 ************************************ 00:09:41.049 START TEST nvme_single_aen 00:09:41.049 ************************************ 00:09:41.049 09:47:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:41.049 [2024-12-15 09:47:30.027614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:41.049 [2024-12-15 09:47:30.027686] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.311 [2024-12-15 09:47:30.160272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:41.311 [2024-12-15 09:47:30.162363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:41.311 [2024-12-15 09:47:30.163432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:41.311 [2024-12-15 09:47:30.164485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:41.311 Asynchronous Event Request test 00:09:41.311 Attached to 0000:00:09.0 00:09:41.311 Attached to 0000:00:06.0 00:09:41.311 Attached to 0000:00:07.0 00:09:41.311 Attached to 0000:00:08.0 00:09:41.311 Reset controller to setup AER completions for this process 00:09:41.311 Registering asynchronous event callbacks... 00:09:41.311 Getting orig temperature thresholds of all controllers 00:09:41.311 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.311 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.311 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.311 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:41.311 Setting all controllers temperature threshold low to trigger AER 00:09:41.311 Waiting for all controllers temperature threshold to be set lower 00:09:41.311 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.311 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:41.311 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.311 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:41.311 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.311 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:41.311 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:41.311 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:41.311 Waiting for all controllers to trigger AER and reset threshold 00:09:41.311 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.311 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.311 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.311 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.311 Cleaning up... 
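The AER test above works by dropping each controller's temperature threshold below the current 323 Kelvin reading, waiting for the resulting Asynchronous Event, then restoring the 343 Kelvin default. The same SMART temperature field can be inspected with nvme-cli, though only after a controller is handed back to the kernel nvme driver (during this run the devices are held by the userspace driver, so /dev/nvme* nodes do not exist):

  # Assumes the kernel driver owns the controller and /dev/nvme0 exists.
  sudo nvme smart-log /dev/nvme0 | grep -i temperature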
00:09:41.311 00:09:41.311 real 0m0.204s 00:09:41.311 user 0m0.068s 00:09:41.311 sys 0m0.092s 00:09:41.311 09:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.311 ************************************ 00:09:41.311 END TEST nvme_single_aen 00:09:41.311 ************************************ 00:09:41.311 09:47:30 -- common/autotest_common.sh@10 -- # set +x 00:09:41.311 09:47:30 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:41.311 09:47:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:41.311 09:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.311 09:47:30 -- common/autotest_common.sh@10 -- # set +x 00:09:41.311 ************************************ 00:09:41.311 START TEST nvme_doorbell_aers 00:09:41.311 ************************************ 00:09:41.311 09:47:30 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:41.311 09:47:30 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:41.311 09:47:30 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:41.311 09:47:30 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.311 09:47:30 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:41.311 09:47:30 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:41.311 09:47:30 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:41.311 09:47:30 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.311 09:47:30 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.311 09:47:30 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:41.311 09:47:30 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:41.311 09:47:30 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:41.311 09:47:30 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:41.311 09:47:30 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:41.573 [2024-12-15 09:47:30.511051] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:09:51.567 Executing: test_write_invalid_db 00:09:51.567 Waiting for AER completion... 00:09:51.567 Failure: test_write_invalid_db 00:09:51.567 00:09:51.567 Executing: test_invalid_db_write_overflow_sq 00:09:51.567 Waiting for AER completion... 00:09:51.567 Failure: test_invalid_db_write_overflow_sq 00:09:51.567 00:09:51.567 Executing: test_invalid_db_write_overflow_cq 00:09:51.567 Waiting for AER completion... 00:09:51.567 Failure: test_invalid_db_write_overflow_cq 00:09:51.567 00:09:51.567 09:47:40 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:51.567 09:47:40 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:51.567 [2024-12-15 09:47:40.538036] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:01.538 Executing: test_write_invalid_db 00:10:01.538 Waiting for AER completion... 00:10:01.538 Failure: test_write_invalid_db 00:10:01.538 00:10:01.538 Executing: test_invalid_db_write_overflow_sq 00:10:01.538 Waiting for AER completion... 
00:10:01.538 Failure: test_invalid_db_write_overflow_sq 00:10:01.538 00:10:01.538 Executing: test_invalid_db_write_overflow_cq 00:10:01.538 Waiting for AER completion... 00:10:01.538 Failure: test_invalid_db_write_overflow_cq 00:10:01.538 00:10:01.538 09:47:50 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:01.538 09:47:50 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:01.796 [2024-12-15 09:47:50.573897] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:11.767 Executing: test_write_invalid_db 00:10:11.767 Waiting for AER completion... 00:10:11.767 Failure: test_write_invalid_db 00:10:11.767 00:10:11.767 Executing: test_invalid_db_write_overflow_sq 00:10:11.767 Waiting for AER completion... 00:10:11.767 Failure: test_invalid_db_write_overflow_sq 00:10:11.767 00:10:11.767 Executing: test_invalid_db_write_overflow_cq 00:10:11.767 Waiting for AER completion... 00:10:11.767 Failure: test_invalid_db_write_overflow_cq 00:10:11.767 00:10:11.767 09:48:00 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.767 09:48:00 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:11.767 [2024-12-15 09:48:00.576497] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 Executing: test_write_invalid_db 00:10:21.744 Waiting for AER completion... 00:10:21.744 Failure: test_write_invalid_db 00:10:21.744 00:10:21.744 Executing: test_invalid_db_write_overflow_sq 00:10:21.744 Waiting for AER completion... 00:10:21.744 Failure: test_invalid_db_write_overflow_sq 00:10:21.744 00:10:21.744 Executing: test_invalid_db_write_overflow_cq 00:10:21.744 Waiting for AER completion... 00:10:21.744 Failure: test_invalid_db_write_overflow_cq 00:10:21.744 00:10:21.744 00:10:21.744 real 0m40.179s 00:10:21.744 user 0m34.330s 00:10:21.744 sys 0m5.480s 00:10:21.744 09:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:21.744 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:10:21.744 ************************************ 00:10:21.744 END TEST nvme_doorbell_aers 00:10:21.744 ************************************ 00:10:21.744 09:48:10 -- nvme/nvme.sh@97 -- # uname 00:10:21.744 09:48:10 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:21.744 09:48:10 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:21.744 09:48:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:21.744 09:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.744 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:10:21.744 ************************************ 00:10:21.744 START TEST nvme_multi_aen 00:10:21.744 ************************************ 00:10:21.744 09:48:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:21.744 [2024-12-15 09:48:10.508228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:21.744 [2024-12-15 09:48:10.508314] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.744 [2024-12-15 09:48:10.648708] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:21.744 [2024-12-15 09:48:10.648759] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.648788] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.648800] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.650241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:21.744 [2024-12-15 09:48:10.650286] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.650307] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.650318] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.651310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:21.744 [2024-12-15 09:48:10.651334] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.651350] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.651361] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.652335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:21.744 [2024-12-15 09:48:10.652354] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.652368] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.652379] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63902) is not found. Dropping the request. 00:10:21.744 [2024-12-15 09:48:10.662029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:21.744 [2024-12-15 09:48:10.662202] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.744 Child process pid: 64417 00:10:22.002 [Child] Asynchronous Event Request test 00:10:22.002 [Child] Attached to 0000:00:09.0 00:10:22.002 [Child] Attached to 0000:00:06.0 00:10:22.002 [Child] Attached to 0000:00:07.0 00:10:22.002 [Child] Attached to 0000:00:08.0 00:10:22.002 [Child] Registering asynchronous event callbacks... 00:10:22.002 [Child] Getting orig temperature thresholds of all controllers 00:10:22.002 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.002 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:22.003 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 [Child] Cleaning up... 00:10:22.003 Asynchronous Event Request test 00:10:22.003 Attached to 0000:00:09.0 00:10:22.003 Attached to 0000:00:06.0 00:10:22.003 Attached to 0000:00:07.0 00:10:22.003 Attached to 0000:00:08.0 00:10:22.003 Reset controller to setup AER completions for this process 00:10:22.003 Registering asynchronous event callbacks... 
00:10:22.003 Getting orig temperature thresholds of all controllers 00:10:22.003 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:22.003 Setting all controllers temperature threshold low to trigger AER 00:10:22.003 Waiting for all controllers temperature threshold to be set lower 00:10:22.003 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:22.003 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:22.003 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:22.003 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:22.003 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:22.003 Waiting for all controllers to trigger AER and reset threshold 00:10:22.003 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:22.003 Cleaning up... 00:10:22.003 00:10:22.003 real 0m0.427s 00:10:22.003 user 0m0.133s 00:10:22.003 sys 0m0.186s 00:10:22.003 09:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.003 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:10:22.003 ************************************ 00:10:22.003 END TEST nvme_multi_aen 00:10:22.003 ************************************ 00:10:22.003 09:48:10 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:22.003 09:48:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:22.003 09:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.003 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:10:22.003 ************************************ 00:10:22.003 START TEST nvme_startup 00:10:22.003 ************************************ 00:10:22.003 09:48:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:22.261 Initializing NVMe Controllers 00:10:22.262 Attached to 0000:00:09.0 00:10:22.262 Attached to 0000:00:06.0 00:10:22.262 Attached to 0000:00:07.0 00:10:22.262 Attached to 0000:00:08.0 00:10:22.262 Initialization complete. 00:10:22.262 Time used:121480.453 (us). 
00:10:22.262 00:10:22.262 real 0m0.181s 00:10:22.262 user 0m0.056s 00:10:22.262 sys 0m0.089s 00:10:22.262 09:48:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.262 ************************************ 00:10:22.262 END TEST nvme_startup 00:10:22.262 ************************************ 00:10:22.262 09:48:11 -- common/autotest_common.sh@10 -- # set +x 00:10:22.262 09:48:11 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:22.262 09:48:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:22.262 09:48:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.262 09:48:11 -- common/autotest_common.sh@10 -- # set +x 00:10:22.262 ************************************ 00:10:22.262 START TEST nvme_multi_secondary 00:10:22.262 ************************************ 00:10:22.262 09:48:11 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:22.262 09:48:11 -- nvme/nvme.sh@52 -- # pid0=64473 00:10:22.262 09:48:11 -- nvme/nvme.sh@54 -- # pid1=64474 00:10:22.262 09:48:11 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:22.262 09:48:11 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:22.262 09:48:11 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:25.543 Initializing NVMe Controllers 00:10:25.543 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:25.543 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:25.543 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:25.543 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:25.543 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:25.543 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:25.543 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:25.543 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:25.543 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:25.543 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:25.543 Initialization complete. Launching workers. 
00:10:25.543 ======================================================== 00:10:25.543 Latency(us) 00:10:25.543 Device Information : IOPS MiB/s Average min max 00:10:25.543 PCIE (0000:00:09.0) NSID 1 from core 1: 7902.43 30.87 2026.78 720.52 7380.44 00:10:25.543 PCIE (0000:00:06.0) NSID 1 from core 1: 7902.43 30.87 2025.96 708.37 7285.64 00:10:25.543 PCIE (0000:00:07.0) NSID 1 from core 1: 7902.43 30.87 2026.92 717.22 6494.92 00:10:25.543 PCIE (0000:00:08.0) NSID 1 from core 1: 7902.43 30.87 2026.86 712.37 6336.64 00:10:25.543 PCIE (0000:00:08.0) NSID 2 from core 1: 7902.43 30.87 2026.82 733.94 6340.42 00:10:25.543 PCIE (0000:00:08.0) NSID 3 from core 1: 7902.43 30.87 2026.77 721.16 6904.33 00:10:25.543 ======================================================== 00:10:25.543 Total : 47414.58 185.21 2026.69 708.37 7380.44 00:10:25.543 00:10:25.801 Initializing NVMe Controllers 00:10:25.801 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:25.801 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:25.801 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:25.801 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:25.801 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:25.801 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:25.801 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:25.801 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:25.801 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:25.801 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:25.801 Initialization complete. Launching workers. 00:10:25.801 ======================================================== 00:10:25.801 Latency(us) 00:10:25.801 Device Information : IOPS MiB/s Average min max 00:10:25.801 PCIE (0000:00:09.0) NSID 1 from core 2: 3282.61 12.82 4873.80 809.41 12894.68 00:10:25.801 PCIE (0000:00:06.0) NSID 1 from core 2: 3282.61 12.82 4872.85 788.28 12888.72 00:10:25.801 PCIE (0000:00:07.0) NSID 1 from core 2: 3282.61 12.82 4874.27 810.37 12406.20 00:10:25.801 PCIE (0000:00:08.0) NSID 1 from core 2: 3282.61 12.82 4881.04 788.33 12440.31 00:10:25.801 PCIE (0000:00:08.0) NSID 2 from core 2: 3282.61 12.82 4880.99 803.01 12970.78 00:10:25.801 PCIE (0000:00:08.0) NSID 3 from core 2: 3282.61 12.82 4881.13 810.42 12997.43 00:10:25.801 ======================================================== 00:10:25.801 Total : 19695.68 76.94 4877.35 788.28 12997.43 00:10:25.801 00:10:25.801 09:48:14 -- nvme/nvme.sh@56 -- # wait 64473 00:10:27.699 Initializing NVMe Controllers 00:10:27.699 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:27.699 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:27.699 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:27.699 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:27.699 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:27.699 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:27.699 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:27.699 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:27.699 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:27.699 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:27.699 Initialization complete. Launching workers. 
00:10:27.699 ======================================================== 00:10:27.699 Latency(us) 00:10:27.699 Device Information : IOPS MiB/s Average min max 00:10:27.699 PCIE (0000:00:09.0) NSID 1 from core 0: 11151.77 43.56 1434.37 702.49 5795.46 00:10:27.699 PCIE (0000:00:06.0) NSID 1 from core 0: 11151.77 43.56 1433.50 684.51 5622.39 00:10:27.699 PCIE (0000:00:07.0) NSID 1 from core 0: 11151.77 43.56 1434.31 702.52 5926.65 00:10:27.699 PCIE (0000:00:08.0) NSID 1 from core 0: 11151.77 43.56 1434.29 592.37 5714.27 00:10:27.699 PCIE (0000:00:08.0) NSID 2 from core 0: 11151.77 43.56 1434.26 593.22 6124.91 00:10:27.699 PCIE (0000:00:08.0) NSID 3 from core 0: 11151.77 43.56 1434.24 581.35 5949.00 00:10:27.699 ======================================================== 00:10:27.699 Total : 66910.61 261.37 1434.16 581.35 6124.91 00:10:27.699 00:10:27.699 09:48:16 -- nvme/nvme.sh@57 -- # wait 64474 00:10:27.699 09:48:16 -- nvme/nvme.sh@61 -- # pid0=64543 00:10:27.699 09:48:16 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:27.699 09:48:16 -- nvme/nvme.sh@63 -- # pid1=64544 00:10:27.699 09:48:16 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:27.699 09:48:16 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:30.981 Initializing NVMe Controllers 00:10:30.981 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:30.981 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:30.981 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:30.981 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:30.981 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:30.981 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:30.981 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:30.981 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:30.981 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:30.981 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:30.981 Initialization complete. Launching workers. 
00:10:30.981 ======================================================== 00:10:30.981 Latency(us) 00:10:30.981 Device Information : IOPS MiB/s Average min max 00:10:30.981 PCIE (0000:00:09.0) NSID 1 from core 1: 7674.65 29.98 2084.38 721.80 6891.17 00:10:30.981 PCIE (0000:00:06.0) NSID 1 from core 1: 7674.65 29.98 2083.50 706.97 6514.71 00:10:30.981 PCIE (0000:00:07.0) NSID 1 from core 1: 7674.65 29.98 2084.42 728.96 5726.66 00:10:30.981 PCIE (0000:00:08.0) NSID 1 from core 1: 7674.65 29.98 2084.39 727.04 5986.26 00:10:30.981 PCIE (0000:00:08.0) NSID 2 from core 1: 7674.65 29.98 2084.36 738.86 6302.80 00:10:30.981 PCIE (0000:00:08.0) NSID 3 from core 1: 7674.65 29.98 2084.34 734.06 7004.02 00:10:30.981 ======================================================== 00:10:30.981 Total : 46047.89 179.87 2084.23 706.97 7004.02 00:10:30.981 00:10:31.239 Initializing NVMe Controllers 00:10:31.239 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:31.239 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:31.239 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:31.239 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:31.239 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:31.239 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:31.239 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:31.239 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:31.239 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:31.239 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:31.240 Initialization complete. Launching workers. 00:10:31.240 ======================================================== 00:10:31.240 Latency(us) 00:10:31.240 Device Information : IOPS MiB/s Average min max 00:10:31.240 PCIE (0000:00:09.0) NSID 1 from core 0: 7524.65 29.39 2125.92 737.25 7033.47 00:10:31.240 PCIE (0000:00:06.0) NSID 1 from core 0: 7524.65 29.39 2124.89 706.34 6906.79 00:10:31.240 PCIE (0000:00:07.0) NSID 1 from core 0: 7524.65 29.39 2125.78 729.39 6715.85 00:10:31.240 PCIE (0000:00:08.0) NSID 1 from core 0: 7524.65 29.39 2125.71 622.30 6560.39 00:10:31.240 PCIE (0000:00:08.0) NSID 2 from core 0: 7524.65 29.39 2125.65 602.39 6629.95 00:10:31.240 PCIE (0000:00:08.0) NSID 3 from core 0: 7524.65 29.39 2125.59 577.79 6689.65 00:10:31.240 ======================================================== 00:10:31.240 Total : 45147.89 176.36 2125.59 577.79 7033.47 00:10:31.240 00:10:33.143 Initializing NVMe Controllers 00:10:33.143 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:33.143 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:33.143 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:33.143 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:33.143 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:33.143 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:33.143 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:33.143 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:33.143 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:33.143 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:33.143 Initialization complete. Launching workers. 
00:10:33.143 ======================================================== 00:10:33.143 Latency(us) 00:10:33.143 Device Information : IOPS MiB/s Average min max 00:10:33.143 PCIE (0000:00:09.0) NSID 1 from core 2: 4674.58 18.26 3422.26 750.32 12681.86 00:10:33.143 PCIE (0000:00:06.0) NSID 1 from core 2: 4674.58 18.26 3420.95 719.89 16496.26 00:10:33.143 PCIE (0000:00:07.0) NSID 1 from core 2: 4674.58 18.26 3422.33 731.20 16513.82 00:10:33.143 PCIE (0000:00:08.0) NSID 1 from core 2: 4674.58 18.26 3422.11 680.14 12864.27 00:10:33.143 PCIE (0000:00:08.0) NSID 2 from core 2: 4674.58 18.26 3421.89 636.54 12629.56 00:10:33.143 PCIE (0000:00:08.0) NSID 3 from core 2: 4674.58 18.26 3422.02 619.13 12636.08 00:10:33.143 ======================================================== 00:10:33.144 Total : 28047.46 109.56 3421.93 619.13 16513.82 00:10:33.144 00:10:33.144 09:48:21 -- nvme/nvme.sh@65 -- # wait 64543 00:10:33.144 09:48:21 -- nvme/nvme.sh@66 -- # wait 64544 00:10:33.144 00:10:33.144 real 0m10.834s 00:10:33.144 user 0m18.686s 00:10:33.144 sys 0m0.630s 00:10:33.144 09:48:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:33.144 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:10:33.144 ************************************ 00:10:33.144 END TEST nvme_multi_secondary 00:10:33.144 ************************************ 00:10:33.144 09:48:22 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:33.144 09:48:22 -- nvme/nvme.sh@102 -- # kill_stub 00:10:33.144 09:48:22 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63485 ]] 00:10:33.144 09:48:22 -- common/autotest_common.sh@1076 -- # kill 63485 00:10:33.144 09:48:22 -- common/autotest_common.sh@1077 -- # wait 63485 00:10:34.086 [2024-12-15 09:48:22.894311] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.086 [2024-12-15 09:48:22.894360] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.086 [2024-12-15 09:48:22.894371] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.086 [2024-12-15 09:48:22.894386] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.658 [2024-12-15 09:48:23.411288] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.658 [2024-12-15 09:48:23.411343] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.658 [2024-12-15 09:48:23.411354] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:34.658 [2024-12-15 09:48:23.411364] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:35.601 [2024-12-15 09:48:24.422462] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:35.601 [2024-12-15 09:48:24.422511] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. 
Dropping the request. 00:10:35.601 [2024-12-15 09:48:24.422522] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:35.601 [2024-12-15 09:48:24.422533] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:36.989 [2024-12-15 09:48:25.931233] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:36.989 [2024-12-15 09:48:25.931384] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:36.989 [2024-12-15 09:48:25.931415] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:36.989 [2024-12-15 09:48:25.931451] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64416) is not found. Dropping the request. 00:10:37.250 09:48:26 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:37.250 09:48:26 -- common/autotest_common.sh@1083 -- # echo 2 00:10:37.250 09:48:26 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:37.250 09:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:37.250 09:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.250 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:10:37.250 ************************************ 00:10:37.250 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:37.250 ************************************ 00:10:37.250 09:48:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:37.250 * Looking for test storage... 00:10:37.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:37.250 09:48:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:37.250 09:48:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:37.250 09:48:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:37.512 09:48:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:37.512 09:48:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:37.512 09:48:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:37.512 09:48:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:37.512 09:48:26 -- scripts/common.sh@335 -- # IFS=.-: 00:10:37.512 09:48:26 -- scripts/common.sh@335 -- # read -ra ver1 00:10:37.512 09:48:26 -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.512 09:48:26 -- scripts/common.sh@336 -- # read -ra ver2 00:10:37.512 09:48:26 -- scripts/common.sh@337 -- # local 'op=<' 00:10:37.512 09:48:26 -- scripts/common.sh@339 -- # ver1_l=2 00:10:37.512 09:48:26 -- scripts/common.sh@340 -- # ver2_l=1 00:10:37.512 09:48:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:37.512 09:48:26 -- scripts/common.sh@343 -- # case "$op" in 00:10:37.512 09:48:26 -- scripts/common.sh@344 -- # : 1 00:10:37.512 09:48:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:37.512 09:48:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.512 09:48:26 -- scripts/common.sh@364 -- # decimal 1 00:10:37.512 09:48:26 -- scripts/common.sh@352 -- # local d=1 00:10:37.512 09:48:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.512 09:48:26 -- scripts/common.sh@354 -- # echo 1 00:10:37.512 09:48:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:37.512 09:48:26 -- scripts/common.sh@365 -- # decimal 2 00:10:37.512 09:48:26 -- scripts/common.sh@352 -- # local d=2 00:10:37.512 09:48:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.512 09:48:26 -- scripts/common.sh@354 -- # echo 2 00:10:37.512 09:48:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:37.512 09:48:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:37.512 09:48:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:37.512 09:48:26 -- scripts/common.sh@367 -- # return 0 00:10:37.512 09:48:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.512 09:48:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.512 --rc genhtml_branch_coverage=1 00:10:37.512 --rc genhtml_function_coverage=1 00:10:37.512 --rc genhtml_legend=1 00:10:37.512 --rc geninfo_all_blocks=1 00:10:37.512 --rc geninfo_unexecuted_blocks=1 00:10:37.512 00:10:37.512 ' 00:10:37.512 09:48:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.512 --rc genhtml_branch_coverage=1 00:10:37.512 --rc genhtml_function_coverage=1 00:10:37.512 --rc genhtml_legend=1 00:10:37.512 --rc geninfo_all_blocks=1 00:10:37.512 --rc geninfo_unexecuted_blocks=1 00:10:37.512 00:10:37.512 ' 00:10:37.512 09:48:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.512 --rc genhtml_branch_coverage=1 00:10:37.512 --rc genhtml_function_coverage=1 00:10:37.512 --rc genhtml_legend=1 00:10:37.512 --rc geninfo_all_blocks=1 00:10:37.512 --rc geninfo_unexecuted_blocks=1 00:10:37.512 00:10:37.512 ' 00:10:37.512 09:48:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:37.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.512 --rc genhtml_branch_coverage=1 00:10:37.512 --rc genhtml_function_coverage=1 00:10:37.512 --rc genhtml_legend=1 00:10:37.512 --rc geninfo_all_blocks=1 00:10:37.512 --rc geninfo_unexecuted_blocks=1 00:10:37.512 00:10:37.512 ' 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:37.512 09:48:26 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:37.512 09:48:26 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:37.512 09:48:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:37.512 09:48:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:37.512 09:48:26 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:37.512 09:48:26 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:37.512 09:48:26 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:37.512 09:48:26 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:37.512 09:48:26 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:37.512 09:48:26 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:37.512 09:48:26 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:37.512 09:48:26 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64745 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64745 00:10:37.512 09:48:26 -- common/autotest_common.sh@829 -- # '[' -z 64745 ']' 00:10:37.512 09:48:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.512 09:48:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.512 09:48:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.512 09:48:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.512 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:10:37.512 09:48:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:37.512 [2024-12-15 09:48:26.448164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:37.513 [2024-12-15 09:48:26.448395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64745 ] 00:10:37.773 [2024-12-15 09:48:26.616124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.032 [2024-12-15 09:48:26.865735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:38.032 [2024-12-15 09:48:26.866286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.032 [2024-12-15 09:48:26.866466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.032 [2024-12-15 09:48:26.866937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.032 [2024-12-15 09:48:26.867032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.966 09:48:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.966 09:48:27 -- common/autotest_common.sh@862 -- # return 0 00:10:38.966 09:48:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:38.966 09:48:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.966 09:48:27 -- common/autotest_common.sh@10 -- # set +x 00:10:39.224 nvme0n1 00:10:39.224 09:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_EMAR6.txt 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:39.224 09:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.224 09:48:28 -- common/autotest_common.sh@10 -- # set +x 00:10:39.224 true 00:10:39.224 09:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734256108 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64776 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:39.224 09:48:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:41.125 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:41.126 09:48:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.126 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:10:41.126 [2024-12-15 09:48:30.035998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:41.126 [2024-12-15 09:48:30.036321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:41.126 [2024-12-15 09:48:30.036347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:41.126 [2024-12-15 09:48:30.036359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.126 [2024-12-15 09:48:30.037894] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:41.126 09:48:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.126 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64776 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64776 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64776 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:41.126 09:48:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.126 09:48:30 -- common/autotest_common.sh@10 -- # set +x 00:10:41.126 09:48:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_EMAR6.txt 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_EMAR6.txt 00:10:41.126 09:48:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64745 00:10:41.126 09:48:30 -- common/autotest_common.sh@936 -- # '[' -z 64745 ']' 00:10:41.126 09:48:30 -- common/autotest_common.sh@940 -- # kill -0 64745 00:10:41.126 09:48:30 -- common/autotest_common.sh@941 -- # uname 00:10:41.126 09:48:30 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:41.126 09:48:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64745 00:10:41.383 09:48:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:41.383 09:48:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:41.383 killing process with pid 64745 00:10:41.383 09:48:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64745' 00:10:41.383 09:48:30 -- common/autotest_common.sh@955 -- # kill 64745 00:10:41.383 09:48:30 -- common/autotest_common.sh@960 -- # wait 64745 00:10:42.761 09:48:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:42.761 09:48:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:42.761 00:10:42.761 real 0m5.195s 00:10:42.761 user 0m18.020s 00:10:42.761 sys 0m0.593s 00:10:42.761 09:48:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.761 ************************************ 00:10:42.761 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:42.761 09:48:31 -- common/autotest_common.sh@10 -- # set +x 00:10:42.761 ************************************ 00:10:42.761 09:48:31 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:42.761 09:48:31 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:42.761 09:48:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:42.761 09:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.761 09:48:31 -- common/autotest_common.sh@10 -- # set +x 00:10:42.761 ************************************ 00:10:42.761 START TEST nvme_fio 00:10:42.761 ************************************ 00:10:42.761 09:48:31 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:42.761 09:48:31 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:42.761 09:48:31 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:42.761 09:48:31 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:42.761 09:48:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:42.761 09:48:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:42.761 09:48:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:42.762 09:48:31 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:42.762 09:48:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:42.762 09:48:31 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:42.762 09:48:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:42.762 09:48:31 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:42.762 09:48:31 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:42.762 09:48:31 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:42.762 09:48:31 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:42.762 09:48:31 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:42.762 09:48:31 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:42.762 09:48:31 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:43.023 09:48:31 -- nvme/nvme.sh@41 -- # bs=4096 00:10:43.023 09:48:31 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:43.023 09:48:31 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:43.023 09:48:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:43.023 09:48:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:43.023 09:48:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:43.023 09:48:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.023 09:48:31 -- common/autotest_common.sh@1330 -- # shift 00:10:43.023 09:48:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:43.023 09:48:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:43.023 09:48:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.023 09:48:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:43.023 09:48:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:43.023 09:48:31 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:43.023 09:48:31 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:43.023 09:48:31 -- common/autotest_common.sh@1336 -- # break 00:10:43.023 09:48:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:43.023 09:48:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:43.284 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:43.284 fio-3.35 00:10:43.284 Starting 1 thread 00:10:48.580 00:10:48.580 test: (groupid=0, jobs=1): err= 0: pid=64916: Sun Dec 15 09:48:37 2024 00:10:48.580 read: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec) 00:10:48.580 slat (usec): min=3, max=112, avg= 5.30, stdev= 2.70 00:10:48.580 clat (usec): min=459, max=9851, avg=3100.87, stdev=1137.76 00:10:48.580 lat (usec): min=464, max=9936, avg=3106.16, stdev=1139.06 00:10:48.580 clat percentiles (usec): 00:10:48.580 | 1.00th=[ 1696], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:10:48.580 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2638], 60.00th=[ 2835], 00:10:48.580 | 70.00th=[ 3228], 80.00th=[ 3752], 90.00th=[ 4948], 95.00th=[ 5669], 00:10:48.580 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7832], 99.95th=[ 7963], 00:10:48.580 | 99.99th=[ 9765] 00:10:48.580 bw ( KiB/s): min=81536, max=93256, per=100.00%, avg=85792.00, stdev=6485.22, samples=3 00:10:48.580 iops : min=20384, max=23314, avg=21448.00, stdev=1621.30, samples=3 00:10:48.580 write: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2001msec); 0 zone resets 00:10:48.580 slat (usec): min=3, max=177, avg= 5.51, stdev= 2.73 00:10:48.580 clat (usec): min=209, max=9769, avg=3129.33, stdev=1153.49 00:10:48.580 lat (usec): min=214, max=9788, avg=3134.83, stdev=1154.76 00:10:48.580 clat percentiles (usec): 00:10:48.580 | 1.00th=[ 1680], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:10:48.580 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2671], 60.00th=[ 2868], 00:10:48.580 | 70.00th=[ 3261], 80.00th=[ 3818], 90.00th=[ 5014], 95.00th=[ 5735], 00:10:48.580 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 
7767], 99.95th=[ 7963], 00:10:48.581 | 99.99th=[ 8225] 00:10:48.581 bw ( KiB/s): min=81336, max=93104, per=100.00%, avg=85952.00, stdev=6280.52, samples=3 00:10:48.581 iops : min=20334, max=23276, avg=21488.00, stdev=1570.13, samples=3 00:10:48.581 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.07% 00:10:48.581 lat (msec) : 2=2.58%, 4=79.79%, 10=17.55% 00:10:48.581 cpu : usr=98.95%, sys=0.05%, ctx=4, majf=0, minf=609 00:10:48.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:48.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.581 issued rwts: total=40980,40883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.581 00:10:48.581 Run status group 0 (all jobs): 00:10:48.581 READ: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:10:48.581 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (167MB), run=2001-2001msec 00:10:48.581 ----------------------------------------------------- 00:10:48.581 Suppressions used: 00:10:48.581 count bytes template 00:10:48.581 1 32 /usr/src/fio/parse.c 00:10:48.581 1 8 libtcmalloc_minimal.so 00:10:48.581 ----------------------------------------------------- 00:10:48.581 00:10:48.581 09:48:37 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:48.581 09:48:37 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:48.581 09:48:37 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:48.581 09:48:37 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:48.840 09:48:37 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:48.840 09:48:37 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:49.100 09:48:37 -- nvme/nvme.sh@41 -- # bs=4096 00:10:49.100 09:48:37 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:49.100 09:48:37 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:49.100 09:48:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:49.100 09:48:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:49.100 09:48:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:49.100 09:48:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.100 09:48:37 -- common/autotest_common.sh@1330 -- # shift 00:10:49.100 09:48:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:49.100 09:48:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:49.100 09:48:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.100 09:48:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:49.100 09:48:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:49.100 09:48:37 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:49.100 09:48:37 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:49.100 
09:48:37 -- common/autotest_common.sh@1336 -- # break 00:10:49.100 09:48:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:49.100 09:48:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:49.100 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:49.100 fio-3.35 00:10:49.100 Starting 1 thread 00:10:55.798 00:10:55.798 test: (groupid=0, jobs=1): err= 0: pid=64999: Sun Dec 15 09:48:44 2024 00:10:55.798 read: IOPS=23.0k, BW=90.0MiB/s (94.3MB/s)(180MiB/2001msec) 00:10:55.798 slat (usec): min=4, max=141, avg= 5.11, stdev= 2.33 00:10:55.798 clat (usec): min=232, max=8396, avg=2775.66, stdev=892.66 00:10:55.798 lat (usec): min=237, max=8447, avg=2780.76, stdev=894.00 00:10:55.798 clat percentiles (usec): 00:10:55.798 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2311], 00:10:55.798 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:10:55.798 | 70.00th=[ 2638], 80.00th=[ 2999], 90.00th=[ 3851], 95.00th=[ 5080], 00:10:55.798 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 7439], 00:10:55.798 | 99.99th=[ 8225] 00:10:55.798 bw ( KiB/s): min=88520, max=99584, per=100.00%, avg=94256.00, stdev=5543.27, samples=3 00:10:55.798 iops : min=22130, max=24896, avg=23564.00, stdev=1385.82, samples=3 00:10:55.798 write: IOPS=22.9k, BW=89.5MiB/s (93.8MB/s)(179MiB/2001msec); 0 zone resets 00:10:55.798 slat (usec): min=4, max=1172, avg= 5.38, stdev= 5.89 00:10:55.798 clat (usec): min=242, max=8270, avg=2774.35, stdev=879.31 00:10:55.798 lat (usec): min=248, max=8935, avg=2779.72, stdev=880.77 00:10:55.798 clat percentiles (usec): 00:10:55.798 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:10:55.798 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:10:55.798 | 70.00th=[ 2638], 80.00th=[ 2999], 90.00th=[ 3818], 95.00th=[ 5080], 00:10:55.798 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7242], 99.95th=[ 7439], 00:10:55.798 | 99.99th=[ 7963] 00:10:55.798 bw ( KiB/s): min=87928, max=98920, per=100.00%, avg=94253.33, stdev=5680.62, samples=3 00:10:55.798 iops : min=21982, max=24730, avg=23563.33, stdev=1420.15, samples=3 00:10:55.798 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:55.798 lat (msec) : 2=1.40%, 4=89.28%, 10=9.27% 00:10:55.798 cpu : usr=99.10%, sys=0.10%, ctx=4, majf=0, minf=608 00:10:55.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:55.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.798 issued rwts: total=46089,45823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.798 00:10:55.798 Run status group 0 (all jobs): 00:10:55.798 READ: bw=90.0MiB/s (94.3MB/s), 90.0MiB/s-90.0MiB/s (94.3MB/s-94.3MB/s), io=180MiB (189MB), run=2001-2001msec 00:10:55.798 WRITE: bw=89.5MiB/s (93.8MB/s), 89.5MiB/s-89.5MiB/s (93.8MB/s-93.8MB/s), io=179MiB (188MB), run=2001-2001msec 00:10:55.798 ----------------------------------------------------- 00:10:55.798 Suppressions used: 00:10:55.798 count bytes template 00:10:55.798 1 32 /usr/src/fio/parse.c 00:10:55.798 1 8 libtcmalloc_minimal.so 00:10:55.798 
----------------------------------------------------- 00:10:55.798 00:10:55.798 09:48:44 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:55.798 09:48:44 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:55.798 09:48:44 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:55.798 09:48:44 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:55.798 09:48:44 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:55.798 09:48:44 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:56.060 09:48:44 -- nvme/nvme.sh@41 -- # bs=4096 00:10:56.060 09:48:44 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:56.060 09:48:44 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:56.060 09:48:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:56.060 09:48:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:56.060 09:48:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:56.060 09:48:44 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:56.060 09:48:44 -- common/autotest_common.sh@1330 -- # shift 00:10:56.060 09:48:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:56.060 09:48:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:56.060 09:48:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:56.060 09:48:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:56.060 09:48:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:56.060 09:48:44 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:56.060 09:48:44 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:56.060 09:48:44 -- common/autotest_common.sh@1336 -- # break 00:10:56.060 09:48:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:56.060 09:48:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:56.060 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:56.060 fio-3.35 00:10:56.060 Starting 1 thread 00:11:01.352 00:11:01.352 test: (groupid=0, jobs=1): err= 0: pid=65083: Sun Dec 15 09:48:49 2024 00:11:01.352 read: IOPS=15.9k, BW=62.3MiB/s (65.3MB/s)(125MiB/2001msec) 00:11:01.352 slat (nsec): min=4187, max=84824, avg=6126.66, stdev=3447.06 00:11:01.352 clat (usec): min=435, max=11126, avg=3976.43, stdev=1472.25 00:11:01.352 lat (usec): min=440, max=11136, avg=3982.56, stdev=1473.65 00:11:01.352 clat percentiles (usec): 00:11:01.352 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:11:01.352 | 30.00th=[ 2933], 40.00th=[ 3195], 50.00th=[ 3523], 60.00th=[ 3818], 00:11:01.352 | 70.00th=[ 4359], 80.00th=[ 5276], 90.00th=[ 6325], 95.00th=[ 6980], 00:11:01.352 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9372], 99.95th=[10290], 00:11:01.352 | 99.99th=[10683] 
00:11:01.352 bw ( KiB/s): min=51544, max=77728, per=96.13%, avg=61298.67, stdev=14311.32, samples=3 00:11:01.352 iops : min=12886, max=19432, avg=15324.67, stdev=3577.83, samples=3 00:11:01.352 write: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec); 0 zone resets 00:11:01.352 slat (usec): min=4, max=100, avg= 6.33, stdev= 3.56 00:11:01.352 clat (usec): min=426, max=11084, avg=4014.43, stdev=1474.83 00:11:01.352 lat (usec): min=432, max=11091, avg=4020.75, stdev=1476.26 00:11:01.352 clat percentiles (usec): 00:11:01.352 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802], 00:11:01.352 | 30.00th=[ 2966], 40.00th=[ 3261], 50.00th=[ 3556], 60.00th=[ 3884], 00:11:01.352 | 70.00th=[ 4424], 80.00th=[ 5342], 90.00th=[ 6390], 95.00th=[ 7046], 00:11:01.352 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[ 9634], 99.95th=[10290], 00:11:01.352 | 99.99th=[10683] 00:11:01.352 bw ( KiB/s): min=51688, max=76280, per=95.27%, avg=60861.33, stdev=13432.97, samples=3 00:11:01.352 iops : min=12922, max=19070, avg=15215.33, stdev=3358.24, samples=3 00:11:01.352 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.02% 00:11:01.352 lat (msec) : 2=0.65%, 4=62.74%, 10=36.47%, 20=0.08% 00:11:01.352 cpu : usr=98.55%, sys=0.30%, ctx=3, majf=0, minf=608 00:11:01.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:01.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.352 issued rwts: total=31899,31957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.352 00:11:01.352 Run status group 0 (all jobs): 00:11:01.352 READ: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=125MiB (131MB), run=2001-2001msec 00:11:01.352 WRITE: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:11:01.352 ----------------------------------------------------- 00:11:01.352 Suppressions used: 00:11:01.352 count bytes template 00:11:01.352 1 32 /usr/src/fio/parse.c 00:11:01.352 1 8 libtcmalloc_minimal.so 00:11:01.352 ----------------------------------------------------- 00:11:01.352 00:11:01.352 09:48:49 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:01.352 09:48:49 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:01.352 09:48:49 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:01.352 09:48:49 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:01.352 09:48:50 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:01.352 09:48:50 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:01.352 09:48:50 -- nvme/nvme.sh@41 -- # bs=4096 00:11:01.352 09:48:50 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:01.352 09:48:50 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:01.352 09:48:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:01.352 09:48:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:01.352 09:48:50 -- common/autotest_common.sh@1328 -- # local sanitizers 
00:11:01.352 09:48:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.352 09:48:50 -- common/autotest_common.sh@1330 -- # shift 00:11:01.352 09:48:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:01.352 09:48:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:01.352 09:48:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.352 09:48:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:01.352 09:48:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:01.352 09:48:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:01.352 09:48:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:01.352 09:48:50 -- common/autotest_common.sh@1336 -- # break 00:11:01.352 09:48:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:01.352 09:48:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:01.614 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:01.614 fio-3.35 00:11:01.614 Starting 1 thread 00:11:08.195 00:11:08.195 test: (groupid=0, jobs=1): err= 0: pid=65156: Sun Dec 15 09:48:55 2024 00:11:08.195 read: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(103MiB/2001msec) 00:11:08.195 slat (usec): min=4, max=472, avg= 7.03, stdev= 5.08 00:11:08.195 clat (usec): min=919, max=10441, avg=4806.61, stdev=1471.92 00:11:08.195 lat (usec): min=924, max=10474, avg=4813.64, stdev=1473.10 00:11:08.195 clat percentiles (usec): 00:11:08.195 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3163], 20.00th=[ 3392], 00:11:08.195 | 30.00th=[ 3687], 40.00th=[ 4015], 50.00th=[ 4555], 60.00th=[ 5145], 00:11:08.195 | 70.00th=[ 5604], 80.00th=[ 6128], 90.00th=[ 6849], 95.00th=[ 7439], 00:11:08.195 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[ 9634], 99.95th=[10159], 00:11:08.195 | 99.99th=[10421] 00:11:08.195 bw ( KiB/s): min=51008, max=57432, per=100.00%, avg=54093.33, stdev=3219.48, samples=3 00:11:08.195 iops : min=12752, max=14358, avg=13523.33, stdev=804.87, samples=3 00:11:08.195 write: IOPS=13.2k, BW=51.6MiB/s (54.2MB/s)(103MiB/2001msec); 0 zone resets 00:11:08.195 slat (usec): min=4, max=1195, avg= 7.44, stdev= 8.52 00:11:08.195 clat (usec): min=941, max=10371, avg=4839.71, stdev=1465.86 00:11:08.195 lat (usec): min=947, max=10378, avg=4847.15, stdev=1467.13 00:11:08.195 clat percentiles (usec): 00:11:08.195 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3458], 00:11:08.195 | 30.00th=[ 3720], 40.00th=[ 4047], 50.00th=[ 4621], 60.00th=[ 5211], 00:11:08.195 | 70.00th=[ 5669], 80.00th=[ 6194], 90.00th=[ 6849], 95.00th=[ 7439], 00:11:08.195 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[ 9503], 99.95th=[ 9765], 00:11:08.195 | 99.99th=[10290] 00:11:08.195 bw ( KiB/s): min=51288, max=57120, per=100.00%, avg=54165.33, stdev=2916.77, samples=3 00:11:08.195 iops : min=12822, max=14280, avg=13541.33, stdev=729.19, samples=3 00:11:08.195 lat (usec) : 1000=0.02% 00:11:08.195 lat (msec) : 2=0.10%, 4=39.04%, 10=60.80%, 20=0.05% 00:11:08.195 cpu : usr=97.85%, sys=0.25%, ctx=17, majf=0, minf=606 00:11:08.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:08.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:08.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.195 issued rwts: total=26478,26455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.195 00:11:08.195 Run status group 0 (all jobs): 00:11:08.195 READ: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=103MiB (108MB), run=2001-2001msec 00:11:08.195 WRITE: bw=51.6MiB/s (54.2MB/s), 51.6MiB/s-51.6MiB/s (54.2MB/s-54.2MB/s), io=103MiB (108MB), run=2001-2001msec 00:11:08.195 ----------------------------------------------------- 00:11:08.195 Suppressions used: 00:11:08.195 count bytes template 00:11:08.195 1 32 /usr/src/fio/parse.c 00:11:08.195 1 8 libtcmalloc_minimal.so 00:11:08.195 ----------------------------------------------------- 00:11:08.195 00:11:08.195 09:48:56 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:08.195 09:48:56 -- nvme/nvme.sh@46 -- # true 00:11:08.195 00:11:08.195 real 0m24.780s 00:11:08.195 user 0m17.872s 00:11:08.195 sys 0m10.311s 00:11:08.195 09:48:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.195 ************************************ 00:11:08.195 END TEST nvme_fio 00:11:08.195 ************************************ 00:11:08.195 09:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:08.195 00:11:08.195 real 1m39.545s 00:11:08.195 user 3m42.788s 00:11:08.195 sys 0m20.620s 00:11:08.195 09:48:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.195 09:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:08.195 ************************************ 00:11:08.195 END TEST nvme 00:11:08.195 ************************************ 00:11:08.195 09:48:56 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:08.195 09:48:56 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:08.195 09:48:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:08.195 09:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.195 09:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:08.195 ************************************ 00:11:08.195 START TEST nvme_scc 00:11:08.195 ************************************ 00:11:08.195 09:48:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:08.195 * Looking for test storage... 
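The fio invocation traced above first locates the ASAN runtime with ldd and preloads it ahead of the SPDK external ioengine before launching fio. A minimal sketch of that pattern, under the assumption of the plugin and fio paths shown in the trace (the fio_with_asan wrapper name is illustrative, not the actual autotest_common.sh source):

    # Sketch: preload libasan before an ASAN-instrumented fio ioengine.
    # Assumed paths are taken from the trace above; the function name is hypothetical.
    fio_with_asan() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local asan_lib
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # Preload the sanitizer runtime before the instrumented plugin so that
        # fio's dlopen() of the external ioengine resolves ASAN symbols first.
        # LD_PRELOAD accepts a space-separated list, as seen in the trace.
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }

Called as, e.g., fio_with_asan example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0', this reproduces the LD_PRELOAD line recorded at the start of the run.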
00:11:08.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.195 09:48:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:08.195 09:48:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:08.195 09:48:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:08.195 09:48:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:08.195 09:48:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:08.195 09:48:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:08.195 09:48:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:08.195 09:48:56 -- scripts/common.sh@335 -- # IFS=.-: 00:11:08.195 09:48:56 -- scripts/common.sh@335 -- # read -ra ver1 00:11:08.195 09:48:56 -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.195 09:48:56 -- scripts/common.sh@336 -- # read -ra ver2 00:11:08.195 09:48:56 -- scripts/common.sh@337 -- # local 'op=<' 00:11:08.195 09:48:56 -- scripts/common.sh@339 -- # ver1_l=2 00:11:08.195 09:48:56 -- scripts/common.sh@340 -- # ver2_l=1 00:11:08.195 09:48:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:08.195 09:48:56 -- scripts/common.sh@343 -- # case "$op" in 00:11:08.195 09:48:56 -- scripts/common.sh@344 -- # : 1 00:11:08.195 09:48:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:08.195 09:48:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.195 09:48:56 -- scripts/common.sh@364 -- # decimal 1 00:11:08.195 09:48:56 -- scripts/common.sh@352 -- # local d=1 00:11:08.195 09:48:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.195 09:48:56 -- scripts/common.sh@354 -- # echo 1 00:11:08.195 09:48:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:08.195 09:48:56 -- scripts/common.sh@365 -- # decimal 2 00:11:08.195 09:48:56 -- scripts/common.sh@352 -- # local d=2 00:11:08.195 09:48:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.195 09:48:56 -- scripts/common.sh@354 -- # echo 2 00:11:08.195 09:48:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:08.195 09:48:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:08.195 09:48:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:08.195 09:48:56 -- scripts/common.sh@367 -- # return 0 00:11:08.195 09:48:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.195 09:48:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.195 --rc genhtml_branch_coverage=1 00:11:08.195 --rc genhtml_function_coverage=1 00:11:08.195 --rc genhtml_legend=1 00:11:08.196 --rc geninfo_all_blocks=1 00:11:08.196 --rc geninfo_unexecuted_blocks=1 00:11:08.196 00:11:08.196 ' 00:11:08.196 09:48:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:08.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.196 --rc genhtml_branch_coverage=1 00:11:08.196 --rc genhtml_function_coverage=1 00:11:08.196 --rc genhtml_legend=1 00:11:08.196 --rc geninfo_all_blocks=1 00:11:08.196 --rc geninfo_unexecuted_blocks=1 00:11:08.196 00:11:08.196 ' 00:11:08.196 09:48:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:08.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.196 --rc genhtml_branch_coverage=1 00:11:08.196 --rc genhtml_function_coverage=1 00:11:08.196 --rc genhtml_legend=1 00:11:08.196 --rc geninfo_all_blocks=1 00:11:08.196 --rc geninfo_unexecuted_blocks=1 00:11:08.196 00:11:08.196 ' 00:11:08.196 09:48:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:08.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.196 --rc genhtml_branch_coverage=1 00:11:08.196 --rc genhtml_function_coverage=1 00:11:08.196 --rc genhtml_legend=1 00:11:08.196 --rc geninfo_all_blocks=1 00:11:08.196 --rc geninfo_unexecuted_blocks=1 00:11:08.196 00:11:08.196 ' 00:11:08.196 09:48:56 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:08.196 09:48:56 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:08.196 09:48:56 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:08.196 09:48:56 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:08.196 09:48:56 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.196 09:48:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.196 09:48:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.196 09:48:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.196 09:48:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.196 09:48:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.196 09:48:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.196 09:48:56 -- paths/export.sh@5 -- # export PATH 00:11:08.196 09:48:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.196 09:48:56 -- nvme/functions.sh@10 -- # ctrls=() 00:11:08.196 09:48:56 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:08.196 09:48:56 -- nvme/functions.sh@11 -- # nvmes=() 00:11:08.196 09:48:56 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:08.196 09:48:56 -- nvme/functions.sh@12 -- # bdfs=() 00:11:08.196 09:48:56 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:08.196 09:48:56 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:08.196 09:48:56 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:08.196 
09:48:56 -- nvme/functions.sh@14 -- # nvme_name= 00:11:08.196 09:48:56 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.196 09:48:56 -- nvme/nvme_scc.sh@12 -- # uname 00:11:08.196 09:48:56 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:08.196 09:48:56 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:08.196 09:48:56 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:08.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.196 Waiting for block devices as requested 00:11:08.196 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.196 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.196 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.196 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.524 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:13.524 09:49:02 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:13.524 09:49:02 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:13.524 09:49:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:13.524 09:49:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:13.524 09:49:02 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:13.524 09:49:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:13.524 09:49:02 -- scripts/common.sh@15 -- # local i 00:11:13.524 09:49:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:13.524 09:49:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:13.524 09:49:02 -- scripts/common.sh@24 -- # return 0 00:11:13.524 09:49:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:13.524 09:49:02 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:13.524 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:13.524 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.524 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:13.524 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.524 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.524 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:13.524 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.524 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.524 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.524 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:13.524 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:13.524 09:49:02 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:13.524 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.524 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.524 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:13.524 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 
09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:13.525 09:49:02 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.525 09:49:02 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:13.525 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:13.525 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:13.526 09:49:02 
-- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:13.526 
09:49:02 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.526 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:13.526 09:49:02 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:13.526 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 
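The nvme_get loop traced above snapshots every field of "nvme id-ctrl" output into a Bash associative array, splitting each line on ':' into a register name and value. A reduced sketch of the same read loop, assuming the nvme-cli path and device node from the trace (the two fields echoed at the end are chosen only for illustration):

    # Reduced sketch of the nvme_get read loop: split each id-ctrl line on ':'
    # into register/value and store it in an associative array.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # register names carry padding
        val=${val#"${val%%[![:space:]]*}"}      # left-trim the value, keep the rest
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"

The real functions.sh additionally evals each assignment into a named global array per controller (nvme0, nvme1, ...), which is why every stored field appears in the trace as its own eval line.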
00:11:13.527 09:49:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:13.527 09:49:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:13.527 09:49:02 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:13.527 09:49:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:13.527 09:49:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:13.527 09:49:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:13.527 09:49:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:13.527 09:49:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:13.527 09:49:02 -- scripts/common.sh@15 -- # local i 00:11:13.527 09:49:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:13.527 09:49:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:13.527 09:49:02 -- scripts/common.sh@24 -- # return 0 00:11:13.527 09:49:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:13.527 09:49:02 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:13.527 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.527 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:13.527 09:49:02 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:13.527 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.527 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.527 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:13.528 09:49:02 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:13.528 09:49:02 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.528 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.528 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 
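[Annotation] The repeating pattern above is nvme_get (nvme/functions.sh@17-23) walking the output of `nvme id-ctrl`: each line is split on the first ':' into a register name and a value, empty values are skipped, and the pair is eval'ed into a global associative array named after the device. A minimal standalone sketch of that loop, assuming nvme-cli's usual `field : value` output; the helper name nvme_get_sketch is illustrative, not part of functions.sh:

  # Hypothetical, simplified version of the loop being traced above.
  nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                      # global assoc array, as at @20
    while IFS=: read -r reg val; do          # split "frmw : 0x3" on ':' (@21)
      reg=${reg//[[:space:]]/}               # strip padding around the key
      val=${val# }                           # trim one leading space
      [[ -n $val ]] || continue              # skip blank values (@22)
      eval "${ref}[${reg}]=\"\$val\""        # e.g. nvme1[frmw]=0x3 (@23)
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
  }

Called as `nvme_get_sketch nvme1 /dev/nvme1`, this yields lookups like `${nvme1[wctemp]}` (343 above, i.e. roughly 70 °C in the spec's Kelvin encoding).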
00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.529 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.529 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:13.529 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:13.530 09:49:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:13.530 09:49:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:13.530 09:49:02 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:13.530 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.530 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.530 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.530 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:13.530 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.531 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.531 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.531 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
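[Annotation] The lbaf0-lbaf7 descriptors being captured here are the namespace's supported LBA formats: lbads is the data size as a power of two (lbads:9 = 512 B, lbads:12 = 4096 B), ms is the per-block metadata size, and the low nibble of flbas (0x4 above) selects the active format, which the trace marks "(in use)". A small follow-up sketch, assuming the arrays populated above are in scope; this is illustrative, not part of the test itself:

  # Derive the active block size of nvme1n1 from flbas and that format's lbads.
  fmt_idx=$(( ${nvme1n1[flbas]} & 0xf ))      # 0x4 -> LBA format 4
  lbaf=${nvme1n1[lbaf$fmt_idx]}               # "ms:0 lbads:12 rp:0 (in use)"
  lbads=${lbaf##*lbads:}                      # strip through "lbads:"
  lbads=${lbads%% *}                          # first token -> 12
  echo "block size: $(( 1 << lbads )) bytes"  # 4096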
00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:13.532 09:49:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:13.532 09:49:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:13.532 09:49:02 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:13.532 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.532 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.532 09:49:02 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.532 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.532 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:13.532 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:13.532 09:49:02 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
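[Annotation] With nvme1n2 parsed and registered, the loop at nvme/functions.sh@54-58 moves on to the next sysfs child: it globs the controller directory for namespace nodes, runs nvme_get with id-ns on each, and files the result under the namespace number stripped from the device name. A simplified standalone sketch of that enumeration (nvme_get_sketch from the earlier note stands in for the real nvme_get):

  # Walk a controller's namespaces and index them by namespace number (@54-58).
  ctrl=/sys/class/nvme/nvme1
  declare -A nvme1_ns
  for ns in "$ctrl/${ctrl##*/}n"*; do          # .../nvme1n1 .../nvme1n2 ...
    [[ -e $ns ]] || continue                   # guard the unmatched-glob case
    ns_dev=${ns##*/}                           # e.g. nvme1n2
    # nvme_get_sketch "$ns_dev" "/dev/$ns_dev" # fill nvme1n2[...] from id-ns
    nvme1_ns[${ns_dev##*n}]=$ns_dev            # index 2 -> nvme1n2 (@58)
  done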
00:11:13.533 09:49:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:13.533 09:49:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:13.533 09:49:02 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:13.533 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.533 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
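[Annotation] Worth noting in passing: each of these QEMU namespaces reports nsze = ncap = nuse = 0x100000 logical blocks, and with the 4 KiB in-use format that works out to 4 GiB apiece:

  # nsze is in logical blocks; 0x100000 blocks x 4096 B = 4 GiB (assumes the
  # lbads:12 "(in use)" format shown above).
  echo $(( 0x100000 * 4096 ))   # 4294967296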
00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.533 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:13.533 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:13.533 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.534 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:13.534 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.534 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:13.535 09:49:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:13.535 09:49:02 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:13.535 09:49:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:13.535 09:49:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:13.535 09:49:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:13.535 09:49:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:13.535 09:49:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:13.535 09:49:02 -- scripts/common.sh@15 -- # local i 00:11:13.535 09:49:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:13.535 09:49:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:13.535 09:49:02 -- scripts/common.sh@24 -- # return 0 00:11:13.535 09:49:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:13.535 09:49:02 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:13.535 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.535 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.535 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.535 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.535 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 
09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 
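[editor's note] The register dump above (and the identical dumps for every other controller and namespace in this run) is bash xtrace of one small loop. A minimal sketch of what functions.sh@16-23 is doing — the whitespace-trimming detail is an assumption, not copied from the source:

  nvme_get_sketch() {
    # Populate a named global associative array (e.g. "nvme2") from the
    # "reg : val" lines that `nvme id-ctrl` / `nvme id-ns` print.
    local ref=$1 reg val
    shift
    local -gA "$ref=()"              # the trace's functions.sh@20
    while IFS=: read -r reg val; do  # functions.sh@21
      [[ -n $val ]] || continue      # functions.sh@22: skip empty values
      reg=${reg//[[:space:]]/}       # assumed trim; yields keys like "vid", "sn"
      eval "${ref}[\$reg]=\$val"     # functions.sh@23: e.g. nvme2[vid]="0x1b36"
    done < <("$@")
  }
  # nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2

Because the assignment happens inside eval with the array name supplied as a string, the same loop fills nvme1, nvme2, nvme2n1, etc., which is why each xtrace entry shows the eval'd text followed by the resulting assignment.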
00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 
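[editor's note] Wrapping that parser, the outer loop (functions.sh@47-63; its iterations are visible where nvme2 was picked up above and where nvme3 follows below) enumerates controllers from sysfs and records each one in parallel maps. A sketch, under the assumption that the PCI address is resolved via the sysfs `device` link rather than however scripts/common.sh actually derives it:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                       # functions.sh@48
    ctrl_dev=${ctrl##*/}                             # e.g. nvme2
    pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:06.0 (assumed)
    # ... nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev", then
    # one id-ns pass per $ctrl/${ctrl_dev}n* namespace ...
    ctrls[$ctrl_dev]=$ctrl_dev                       # functions.sh@60
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # @61: name of the ns map
    bdfs[$ctrl_dev]=$pci                             # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63: index by controller number
  done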
00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:13.536 09:49:02 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.536 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.536 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:13.537 09:49:02 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:13.537 09:49:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:13.537 09:49:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:13.537 09:49:02 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:13.537 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.537 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.537 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.537 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:13.537 09:49:02 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 
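[editor's note] Each namespace dump ends with eight lbafN descriptors plus flbas, which together pin down the in-use block size. The trace never decodes them, but for reference, once this namespace's dump completes a few lines below (flbas=0x7, lbaf7="ms:64 lbads:12 rp:0 (in use)"), the decode would be:

  flbas=$(( nvme2n1[flbas] & 0xf ))    # low 4 bits select the format: 7
  lbaf=${nvme2n1[lbaf$flbas]}          # "ms:64 lbads:12 rp:0 (in use)"
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf")
  echo "$(( 1 << lbads )) bytes/block, metadata ${lbaf%% *}"  # 4096 bytes/block, ms:64

lbads is log2 of the LBA data size and ms is the per-block metadata size, so this QEMU namespace is formatted 4096+64, unlike nvme2n1's sibling namespaces above that sit on lbaf4 (4096+0).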
00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.538 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:13.538 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.538 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:13.539 09:49:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:13.539 09:49:02 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:13.539 09:49:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:13.539 09:49:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:13.539 09:49:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:13.539 09:49:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:13.539 09:49:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:13.539 09:49:02 -- scripts/common.sh@15 -- # local i 00:11:13.539 09:49:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:13.539 09:49:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:13.539 09:49:02 -- scripts/common.sh@24 -- # return 0 00:11:13.539 09:49:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:13.539 09:49:02 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:13.539 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.539 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.539 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:13.539 09:49:02 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.539 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.540 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.540 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.540 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:13.541 09:49:02 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.541 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.541 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.541 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:13.542 09:49:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:13.542 09:49:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:13.542 09:49:02 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:13.542 09:49:02 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@18 -- # shift 00:11:13.542 09:49:02 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.542 
09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.542 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:13.542 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.542 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.543 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.543 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.543 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:13.544 09:49:02 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # IFS=: 00:11:13.544 09:49:02 -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.544 09:49:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:13.544 09:49:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:13.544 09:49:02 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:13.544 09:49:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:13.544 09:49:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:13.544 09:49:02 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:13.544 09:49:02 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:13.544 09:49:02 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:13.544 09:49:02 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:13.544 09:49:02 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:13.544 09:49:02 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:13.544 09:49:02 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:13.544 09:49:02 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:13.544 09:49:02 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:13.544 09:49:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:13.544 09:49:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:13.544 09:49:02 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:13.544 09:49:02 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:13.544 09:49:02 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:13.544 09:49:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:13.806 09:49:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # echo nvme1 00:11:13.806 09:49:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:13.806 09:49:02 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:13.806 
09:49:02 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:13.806 09:49:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:13.806 09:49:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # echo nvme0 00:11:13.806 09:49:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:13.806 09:49:02 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:13.806 09:49:02 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:13.806 09:49:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:13.806 09:49:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # echo nvme3 00:11:13.806 09:49:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:13.806 09:49:02 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:13.806 09:49:02 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:13.806 09:49:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:13.806 09:49:02 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:13.806 09:49:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:13.806 09:49:02 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:13.806 09:49:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:13.806 09:49:02 -- nvme/functions.sh@197 -- # echo nvme2 00:11:13.806 09:49:02 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:13.806 09:49:02 -- nvme/functions.sh@206 -- # echo nvme1 00:11:13.806 09:49:02 -- nvme/functions.sh@207 -- # return 0 00:11:13.806 09:49:02 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:13.807 09:49:02 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:13.807 09:49:02 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:14.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:14.750 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.750 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.750 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.750 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.750 09:49:03 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:14.750 09:49:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:14.750 09:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.750 09:49:03 -- common/autotest_common.sh@10 -- # set +x 00:11:14.750 ************************************ 00:11:14.750 START TEST nvme_simple_copy 00:11:14.750 ************************************ 00:11:14.750 09:49:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:15.323 Initializing NVMe Controllers 00:11:15.323 Attaching to 0000:00:08.0 00:11:15.323 Controller supports SCC. Attached to 0000:00:08.0 00:11:15.323 Namespace ID: 1 size: 4GB 00:11:15.323 Initialization complete. 00:11:15.323 00:11:15.323 Controller QEMU NVMe Ctrl (12342 ) 00:11:15.323 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:15.323 Namespace Block Size:4096 00:11:15.323 Writing LBAs 0 to 63 with Random Data 00:11:15.323 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:15.323 LBAs matching Written Data: 64 00:11:15.323 ************************************ 00:11:15.323 END TEST nvme_simple_copy 00:11:15.323 ************************************ 00:11:15.323 00:11:15.323 real 0m0.288s 00:11:15.323 user 0m0.111s 00:11:15.323 sys 0m0.075s 00:11:15.323 09:49:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:15.323 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:15.323 ************************************ 00:11:15.323 END TEST nvme_scc 00:11:15.323 ************************************ 00:11:15.323 00:11:15.323 real 0m7.826s 00:11:15.323 user 0m1.098s 00:11:15.323 sys 0m1.501s 00:11:15.323 09:49:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:15.323 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:15.323 09:49:04 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:15.323 09:49:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:15.323 09:49:04 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:15.323 09:49:04 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:15.323 09:49:04 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:15.323 09:49:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:15.323 09:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.323 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:11:15.323 ************************************ 00:11:15.323 START TEST nvme_fdp 00:11:15.323 ************************************ 00:11:15.323 09:49:04 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:15.323 * Looking for test storage... 
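The controller selection traced just above hinges on a single bit: ctrl_has_scc reads ONCS out of each controller's associative array (0x15d on every controller in this run) and tests bit 8, the Copy command capability, before the simple-copy test is allowed to run against it. A minimal standalone sketch of the same check follows; nvme-cli being on PATH and /dev/nvme1 existing are assumptions for illustration, not taken from this run:

    # Minimal sketch of the ctrl_has_scc test seen in the trace above.
    # Assumes nvme-cli is on PATH and /dev/nvme1 exists.
    oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
    if (( oncs & (1 << 8) )); then
        echo "/dev/nvme1 supports the Copy command (ONCS=$oncs)"
    fi

0x15d is binary 1_0101_1101, so bit 8 is set on all four controllers and get_ctrls_with_feature echoes each of them; the harness then takes the first one (nvme1) as the target for the simple-copy run shown above.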
00:11:15.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:15.323 09:49:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:15.323 09:49:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:15.323 09:49:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:15.323 09:49:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:15.323 09:49:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:15.323 09:49:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:15.323 09:49:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:15.323 09:49:04 -- scripts/common.sh@335 -- # IFS=.-: 00:11:15.323 09:49:04 -- scripts/common.sh@335 -- # read -ra ver1 00:11:15.323 09:49:04 -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.323 09:49:04 -- scripts/common.sh@336 -- # read -ra ver2 00:11:15.323 09:49:04 -- scripts/common.sh@337 -- # local 'op=<' 00:11:15.323 09:49:04 -- scripts/common.sh@339 -- # ver1_l=2 00:11:15.323 09:49:04 -- scripts/common.sh@340 -- # ver2_l=1 00:11:15.323 09:49:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:15.323 09:49:04 -- scripts/common.sh@343 -- # case "$op" in 00:11:15.323 09:49:04 -- scripts/common.sh@344 -- # : 1 00:11:15.323 09:49:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:15.323 09:49:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.323 09:49:04 -- scripts/common.sh@364 -- # decimal 1 00:11:15.323 09:49:04 -- scripts/common.sh@352 -- # local d=1 00:11:15.323 09:49:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.323 09:49:04 -- scripts/common.sh@354 -- # echo 1 00:11:15.323 09:49:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:15.323 09:49:04 -- scripts/common.sh@365 -- # decimal 2 00:11:15.323 09:49:04 -- scripts/common.sh@352 -- # local d=2 00:11:15.323 09:49:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.323 09:49:04 -- scripts/common.sh@354 -- # echo 2 00:11:15.323 09:49:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:15.323 09:49:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:15.323 09:49:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:15.323 09:49:04 -- scripts/common.sh@367 -- # return 0 00:11:15.323 09:49:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.323 09:49:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:15.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.323 --rc genhtml_branch_coverage=1 00:11:15.323 --rc genhtml_function_coverage=1 00:11:15.323 --rc genhtml_legend=1 00:11:15.323 --rc geninfo_all_blocks=1 00:11:15.323 --rc geninfo_unexecuted_blocks=1 00:11:15.323 00:11:15.323 ' 00:11:15.323 09:49:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:15.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.323 --rc genhtml_branch_coverage=1 00:11:15.323 --rc genhtml_function_coverage=1 00:11:15.323 --rc genhtml_legend=1 00:11:15.323 --rc geninfo_all_blocks=1 00:11:15.323 --rc geninfo_unexecuted_blocks=1 00:11:15.323 00:11:15.323 ' 00:11:15.323 09:49:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:15.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.323 --rc genhtml_branch_coverage=1 00:11:15.323 --rc genhtml_function_coverage=1 00:11:15.323 --rc genhtml_legend=1 00:11:15.323 --rc geninfo_all_blocks=1 00:11:15.323 --rc geninfo_unexecuted_blocks=1 00:11:15.323 00:11:15.323 ' 00:11:15.323 09:49:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:15.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.323 --rc genhtml_branch_coverage=1 00:11:15.323 --rc genhtml_function_coverage=1 00:11:15.323 --rc genhtml_legend=1 00:11:15.323 --rc geninfo_all_blocks=1 00:11:15.323 --rc geninfo_unexecuted_blocks=1 00:11:15.323 00:11:15.323 ' 00:11:15.323 09:49:04 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:15.323 09:49:04 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:15.323 09:49:04 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:15.323 09:49:04 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:15.323 09:49:04 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.323 09:49:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.323 09:49:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.323 09:49:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.323 09:49:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.323 09:49:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.323 09:49:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.323 09:49:04 -- paths/export.sh@5 -- # export PATH 00:11:15.323 09:49:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.323 09:49:04 -- nvme/functions.sh@10 -- # ctrls=() 00:11:15.323 09:49:04 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:15.323 09:49:04 -- nvme/functions.sh@11 -- # nvmes=() 00:11:15.323 09:49:04 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:15.323 09:49:04 -- nvme/functions.sh@12 -- # bdfs=() 00:11:15.323 09:49:04 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:15.323 09:49:04 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:15.323 09:49:04 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:15.323 
09:49:04 -- nvme/functions.sh@14 -- # nvme_name= 00:11:15.323 09:49:04 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.323 09:49:04 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:15.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.895 Waiting for block devices as requested 00:11:15.895 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:16.156 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:16.156 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:16.156 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:21.451 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:21.451 09:49:10 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:21.451 09:49:10 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:21.451 09:49:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:21.451 09:49:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:21.451 09:49:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:21.451 09:49:10 -- scripts/common.sh@15 -- # local i 00:11:21.451 09:49:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:21.451 09:49:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:21.451 09:49:10 -- scripts/common.sh@24 -- # return 0 00:11:21.451 09:49:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:21.451 09:49:10 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:21.451 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.451 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:21.451 09:49:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:21.451 09:49:10 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.451 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.451 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 
09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:21.452 
09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:21.452 09:49:10 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.452 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:21.452 09:49:10 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.452 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 
09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 
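[Annotation] The long run of IFS=: / read -r reg val / eval entries above is the xtrace of the nvme_get helper in nvme/functions.sh filling the nvme0 associative array from `nvme id-ctrl` output, one "reg : val" line at a time. A minimal standalone sketch of that pattern follows; the helper name and the key normalization shown are illustrative, not the exact functions.sh source:

  # Sketch only: mirrors the trace's parse loop, assuming id-ctrl
  # prints "reg : val" lines (header lines have no value and are skipped).
  nvme_get_sketch() {
    local ref=$1 ctrl=$2 reg val          # e.g. ref=nvme0 ctrl=/dev/nvme0
    local -gA "$ref=()"                   # as at functions.sh@20 in the trace
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue           # matches the [[ -n ... ]] guards above
      reg=${reg//[[:space:]]/}            # illustrative: collapse padded keys
      eval "${ref}[\$reg]=\"\${val# }\""  # e.g. nvme0[vid]="0x1b36"
    done < <(nvme id-ctrl "$ctrl")
  }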
00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:21.453 09:49:10 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:21.453 09:49:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
00:11:21.453 09:49:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:21.453 09:49:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:21.453 09:49:10 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:21.453 09:49:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:21.453 09:49:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:21.453 09:49:10 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:21.453 09:49:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:21.453 09:49:10 -- scripts/common.sh@15 -- # local i 00:11:21.453 09:49:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:21.453 09:49:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:21.453 09:49:10 -- scripts/common.sh@24 -- # return 0 00:11:21.453 09:49:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:21.453 09:49:10 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:21.453 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:21.453 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.453 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:21.453 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:21.454 
09:49:10 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 
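[Annotation] Note the contrast between the two id-ctrl dumps: nvme0 reported ctratt=0x88010 while nvme1 above reports ctratt=0x8000. Bit 19 of CTRATT (0x80000) is the Flexible Data Placement attribute, so only nvme0 advertises FDP, consistent with its subnqn (nqn.2019-08.org.qemu:fdp-subsys3) and with this being the controller the FDP test exercises. A one-line check of that bit, using the value parsed above:

  declare -A nvme0=( [ctratt]=0x88010 )                  # value from the trace
  (( nvme0[ctratt] & 1 << 19 )) && echo "nvme0 supports FDP"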
00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.454 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:21.454 09:49:10 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:21.454 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 
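[Annotation] For context, the outer loop driving each of these dumps (functions.sh@47-52 in the trace) walks /sys/class/nvme/nvme*, resolves the controller's PCI address, gates it through pci_can_use from scripts/common.sh (the empty left-hand side of the `[[ =~ ]]` test in the trace suggests no block/allow list is set, so every device passes), and only then runs the id-ctrl parse. A rough skeleton; the sysfs readlink step is an assumed way of recovering the BDF, and nvme_get_sketch is the illustrative helper sketched earlier:

  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # as at functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:09.0 (assumed)
    pci_can_use "$pci" || continue                    # allow/block-list gate
    ctrl_dev=${ctrl##*/}                              # e.g. nvme0
    nvme_get_sketch "$ctrl_dev" "/dev/$ctrl_dev"
  done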
00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
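[Annotation] Once a controller's array is filled, the trace registers it in a set of parallel maps (functions.sh@53 and @60-63 after the nvme0 dump) so later test code can go from a controller name to its register array, its namespace table, and its PCI address. The shape of that bookkeeping, with the values from the nvme0 pass; the declare lines are assumed, since the maps are set up elsewhere in functions.sh:

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  ctrl_dev=nvme0
  ctrls["$ctrl_dev"]=nvme0                 # @60: name of the register array
  nvmes["$ctrl_dev"]=nvme0_ns              # @61: name of its namespace table
  bdfs["$ctrl_dev"]=0000:00:09.0           # @62: PCI address it sits on
  ordered_ctrls[${ctrl_dev/nvme/}]=nvme0   # @63: index 0 preserves scan order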
00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.455 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:21.455 09:49:10 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:21.455 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:21.456 09:49:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:21.456 09:49:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:21.456 09:49:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:21.456 09:49:10 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:21.456 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.456 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.456 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:21.456 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:21.456 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 
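
What this stretch of trace is exercising is the harness's generic field scraper: nvme_get() in test/nvme/functions.sh pipes `/usr/local/src/nvme-cli/nvme id-ns` output through a `while IFS=: read -r reg val` loop and evals every non-empty "reg : val" pair into a global associative array (nvme1n1 here). A minimal sketch of that pattern, with hypothetical sample input standing in for the real nvme-cli call:

  #!/usr/bin/env bash
  # Mirror of the functions.sh@21-23 pattern visible in the trace above.
  nvme_get_sketch() {
      local ref=$1 reg val
      local -gA "$ref=()"                  # e.g. 'nvme1n1=()' in the trace
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}         # "nsze    " -> "nsze"
          val=${val# }                     # drop the leading space after ':'
          [[ -n $val ]] || continue        # the @22 guard: skip empty values
          eval "${ref}[${reg}]=\"${val}\"" # the @23 assignment
      done
  }
  nvme_get_sketch ns < <(printf '%s\n' 'nsze   : 0x100000' 'flbas  : 0x4')
  declare -p ns   # declare -A ns=([flbas]="0x4" [nsze]="0x100000" ) or similar
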
00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:21.457 09:49:10 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.457 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.457 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.457 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:21.458 09:49:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:21.458 09:49:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:21.458 09:49:10 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:21.458 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.458 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 
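
The move from nvme1n1 to nvme1n2 above comes from the per-namespace walk at functions.sh@54-58: the controller's namespaces are discovered by globbing its sysfs directory and indexed by namespace number. A self-contained sketch of that walk, using the same paths the trace shows (it simply prints whatever it finds, which may be nothing on another machine):

  #!/usr/bin/env bash
  declare -A ctrl_ns=()
  ctrl=/sys/class/nvme/nvme1
  for ns in "$ctrl/${ctrl##*/}n"*; do   # nvme1n1, nvme1n2, nvme1n3, ...
      [[ -e $ns ]] || continue          # the @55 existence check
      ns_dev=${ns##*/}                  # e.g. nvme1n2
      ctrl_ns[${ns_dev##*n}]=$ns_dev    # key "2" -> nvme1n2, as @58 does
  done
  declare -p ctrl_ns
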
00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:21.458 09:49:10 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 
09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.458 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:21.458 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:21.458 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:21.459 09:49:10 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:21.459 09:49:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:21.459 09:49:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
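
The lbaf0-lbaf7 table that closes each of these id-ns dumps, together with flbas, pins down the block geometry: flbas 0x4 selects lbaf4 ("ms:0 lbads:12 rp:0 (in use)"), lbads:12 means 2^12 = 4096-byte logical blocks, and nsze/ncap/nuse are all 0x100000 blocks, so each of these QEMU namespaces is fully allocated at 4 GiB. A one-liner check of that arithmetic:

  #!/usr/bin/env bash
  nsze=0x100000 lbads=12
  echo "$(( nsze * (1 << lbads) )) bytes"        # 4294967296 bytes
  echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"  # 4 GiB
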
00:11:21.459 09:49:10 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:21.459 09:49:10 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:21.459 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.459 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.459 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:21.459 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:21.459 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.460 09:49:10 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.460 09:49:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.460 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.460 09:49:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:21.460 09:49:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:21.460 09:49:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:21.460 09:49:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:21.460 09:49:10 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:21.460 09:49:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:21.460 09:49:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:21.460 09:49:10 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:21.460 09:49:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:21.461 09:49:10 -- scripts/common.sh@15 -- # local i 00:11:21.461 09:49:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:21.461 09:49:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:21.461 09:49:10 -- scripts/common.sh@24 -- # return 0 00:11:21.461 09:49:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:21.461 09:49:10 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:21.461 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.461 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
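
The identity fields just parsed mark nvme2 as another QEMU-emulated controller: vid 0x1b36 and ssvid 0x1af4 are Red Hat's QEMU PCI IDs, and sn "12340" / mn "QEMU NVMe Ctrl " / fr "8.0.0" come from the VM's device configuration. The pci_can_use check from scripts/common.sh just before this passes because the allow/block lists appear to be empty (the bare `[[ =~ 0000:00:06.0 ]]` reads as the xtrace of a match against an empty allow list), so the scan claims 0000:00:06.0. The same fields can be eyeballed on a live box with stock nvme-cli, assuming its usual plain-text layout:

  nvme id-ctrl /dev/nvme2 | grep -E '^(vid|ssvid|sn|mn|fr) '
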
00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 
09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:21.461 09:49:10 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:21.461 09:49:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.461 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.461 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:21.462 09:49:10 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 
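An aside on two of the values just captured: the NVMe Identify Controller fields sqes and cqes each pack two power-of-two sizes into one byte (low nibble = required entry size, high nibble = maximum). The QEMU controller above reports sqes=0x66 and cqes=0x44, i.e. 2^6 = 64-byte submission queue entries and 2^4 = 16-byte completion queue entries. A quick illustrative check in bash (not part of nvme/functions.sh):

    val=0x66; echo $(( 1 << (val & 0xf) ))   # required SQ entry size -> 64 bytes
    val=0x44; echo $(( 1 << (val & 0xf) ))   # required CQ entry size -> 16 bytes
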
00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:21.462 09:49:10 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.462 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.462 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 
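For readers following the trace: each block above is one pass of nvme_get(), which runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, splits every "field : value" output line on ":" (the repeated IFS=: / read -r reg val steps), and evals the pair into a named global associative array, producing assignments like nvme2[oncs]=0x15d. A minimal sketch of that pattern, assuming plain `nvme id-ctrl` output; this is an illustrative reconstruction, not the verbatim nvme/functions.sh source, and parse_id_ctrl is a hypothetical name:

    # parse_id_ctrl NAME DEV: fill global assoc array NAME from `nvme id-ctrl DEV`
    parse_id_ctrl() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                    # global assoc array, as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # strip padding around the field name
            val=${val# }                       # drop the single space after ":"
            [[ -n $reg && -n $val ]] || continue   # the [[ -n ... ]] guard seen above
            eval "${ref}[\$reg]=\"\$val\""     # e.g. nvme2[oncs]="0x15d"
        done < <(nvme id-ctrl "$dev")
    }
    # usage: parse_id_ctrl nvme2 /dev/nvme2; echo "${nvme2[oncs]}"   -> 0x15d
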
00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:21.463 09:49:10 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.463 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.463 09:49:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.728 09:49:10 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:21.728 09:49:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:21.728 09:49:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:21.728 09:49:10 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:21.728 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.728 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 
09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:21.728 09:49:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:21.728 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.728 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.728 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:21.729 
09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:21.729 09:49:10 
-- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:21.729 09:49:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:21.729 09:49:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:21.729 09:49:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:21.729 09:49:10 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:21.729 09:49:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:21.729 09:49:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:21.729 09:49:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:21.729 09:49:10 -- scripts/common.sh@15 -- # local i 00:11:21.729 09:49:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:21.729 09:49:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:21.729 09:49:10 -- scripts/common.sh@24 -- # return 0 00:11:21.729 09:49:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:21.729 09:49:10 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:21.729 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.729 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.729 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.729 09:49:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:21.729 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 
09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:21.730 09:49:10 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.730 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.730 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:21.731 
09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:21.731 09:49:10 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.731 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.731 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:21.732 09:49:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:21.732 09:49:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:21.732 09:49:10 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:21.732 09:49:10 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@18 -- # shift 00:11:21.732 09:49:10 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:21.732 
09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.732 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:21.732 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.732 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 
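The xtrace above is nvme/functions.sh's nvme_get loop: it runs `nvme id-ctrl`/`nvme id-ns`, splits every output line on `:` into a register name and a value, and eval-assigns the pair into a global associative array (nvme3, nvme3n1) so later checks can read fields such as ctratt or nsze by name. A minimal standalone sketch of the same pattern, assuming nvme-cli's "reg : val" text output; /dev/nvme0 and the array name `ctrl` are placeholders, and the upstream helper needs eval only because its array name is dynamic:

    # Sketch: fold `nvme id-ctrl` output into a bash associative array,
    # as the nvme_get trace above does. /dev/nvme0 is a placeholder.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # keep only "reg : val" lines
        reg=${reg//[[:space:]]/}           # strip padding around the name
        ctrl[$reg]=${val# }                # drop the leading space in the value
    done < <(nvme id-ctrl /dev/nvme0)
    echo "subnqn=${ctrl[subnqn]} ctratt=${ctrl[ctratt]}"

Because `val` is the last variable passed to read, it absorbs the rest of the line unsplit, which is why values that themselves contain colons (like the subnqn nqn.2019-08.org.qemu:12341 above) survive intact.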
09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.733 09:49:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:21.733 09:49:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.733 09:49:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:21.733 09:49:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:21.733 09:49:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:21.733 09:49:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:21.733 09:49:10 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:21.733 09:49:10 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:21.733 09:49:10 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:21.733 09:49:10 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:21.733 09:49:10 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:21.733 09:49:10 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:21.733 09:49:10 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:21.733 09:49:10 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:21.733 09:49:10 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:21.733 09:49:10 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:21.733 09:49:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:21.733 09:49:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:21.733 09:49:10 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:21.733 09:49:10 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:21.733 09:49:10 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:21.734 09:49:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:21.734 09:49:10 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:21.734 09:49:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:21.734 09:49:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:21.734 09:49:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:21.734 09:49:10 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:21.734 09:49:10 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:21.734 09:49:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:21.734 09:49:10 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:21.734 09:49:10 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:21.734 09:49:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:21.734 09:49:10 -- nvme/functions.sh@197 -- # echo nvme0 00:11:21.734 09:49:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:21.734 09:49:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:21.734 09:49:10 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:21.734 09:49:10 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:21.734 09:49:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:21.734 09:49:10 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:21.734 09:49:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:21.734 09:49:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:21.734 09:49:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:21.734 09:49:10 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:21.734 09:49:10 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:21.734 09:49:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:21.734 09:49:10 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:21.734 09:49:10 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:21.734 09:49:10 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:21.734 09:49:10 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:21.734 09:49:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:21.734 09:49:10 -- nvme/functions.sh@204 -- # trap - ERR 00:11:21.734 09:49:10 -- nvme/functions.sh@204 -- # print_backtrace 00:11:21.734 09:49:10 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:21.734 09:49:10 -- common/autotest_common.sh@1142 -- # return 0 00:11:21.734 09:49:10 -- nvme/functions.sh@204 -- # trap - ERR 00:11:21.734 09:49:10 -- nvme/functions.sh@204 -- # print_backtrace 00:11:21.734 09:49:10 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:21.734 09:49:10 -- common/autotest_common.sh@1142 -- # return 0 00:11:21.734 09:49:10 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:21.734 09:49:10 -- nvme/functions.sh@206 -- # echo nvme0 00:11:21.734 09:49:10 -- nvme/functions.sh@207 -- # return 0 00:11:21.734 09:49:10 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:21.734 09:49:10 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:21.734 09:49:10 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:22.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.678 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:22.678 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.970 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.970 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.970 09:49:11 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:22.970 09:49:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:22.970 09:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.970 09:49:11 -- common/autotest_common.sh@10 -- # set +x 00:11:22.970 ************************************ 00:11:22.970 START TEST nvme_flexible_data_placement 00:11:22.970 ************************************ 00:11:22.970 09:49:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:23.229 Initializing NVMe Controllers 00:11:23.229 Attaching to 0000:00:09.0 00:11:23.229 Controller supports FDP Attached to 0000:00:09.0 00:11:23.229 Namespace ID: 1 Endurance Group ID: 1 00:11:23.229 Initialization complete. 00:11:23.229 00:11:23.229 ================================== 00:11:23.229 == FDP tests for Namespace: #01 == 00:11:23.229 ================================== 00:11:23.229 00:11:23.229 Get Feature: FDP: 00:11:23.229 ================= 00:11:23.229 Enabled: Yes 00:11:23.229 FDP configuration Index: 0 00:11:23.229 00:11:23.229 FDP configurations log page 00:11:23.229 =========================== 00:11:23.229 Number of FDP configurations: 1 00:11:23.229 Version: 0 00:11:23.229 Size: 112 00:11:23.229 FDP Configuration Descriptor: 0 00:11:23.229 Descriptor Size: 96 00:11:23.229 Reclaim Group Identifier format: 2 00:11:23.229 FDP Volatile Write Cache: Not Present 00:11:23.229 FDP Configuration: Valid 00:11:23.229 Vendor Specific Size: 0 00:11:23.229 Number of Reclaim Groups: 2 00:11:23.229 Number of Reclaim Unit Handles: 8 00:11:23.229 Max Placement Identifiers: 128 00:11:23.229 Number of Namespaces Supported: 256 00:11:23.229 Reclaim Unit Nominal Size: 6000000 bytes 00:11:23.229 Estimated Reclaim Unit Time Limit: Not Reported 00:11:23.229 RUH Desc #000: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #001: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #002: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #003: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #004: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #005: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #006: RUH Type: Initially Isolated 00:11:23.229 RUH Desc #007: RUH Type: Initially Isolated 00:11:23.229 00:11:23.229 FDP reclaim unit handle usage log page 00:11:23.229 ====================================== 00:11:23.229 Number of Reclaim Unit Handles: 8 00:11:23.229 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:23.229 RUH Usage Desc #001: RUH Attributes: Unused 00:11:23.229 RUH Usage Desc #002: RUH Attributes: Unused 00:11:23.229 RUH Usage Desc #003: RUH Attributes: Unused 00:11:23.229 RUH Usage Desc #004: RUH Attributes: Unused 00:11:23.229 RUH Usage Desc #005: RUH Attributes: Unused 00:11:23.229 RUH Usage Desc #006: RUH Attributes: Unused 00:11:23.229 RUH Usage Desc #007: RUH Attributes: Unused 00:11:23.229 00:11:23.229 FDP statistics log page 00:11:23.229 ======================= 00:11:23.229 Host bytes with metadata written: 950460416 00:11:23.229 Media bytes with metadata written: 950624256 00:11:23.229 Media bytes erased: 0 00:11:23.229 00:11:23.229 FDP Reclaim unit handle status
00:11:23.229 ============================== 00:11:23.229 Number of RUHS descriptors: 2 00:11:23.229 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003592 00:11:23.229 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:23.229 00:11:23.229 FDP write on placement id: 0 success 00:11:23.229 00:11:23.229 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:23.229 00:11:23.229 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:23.229 00:11:23.229 Get Feature: FDP Events for Placement handle: #0 00:11:23.229 ======================== 00:11:23.229 Number of FDP Events: 6 00:11:23.229 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:23.229 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:23.229 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:23.229 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:23.229 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:23.229 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:23.229 00:11:23.229 FDP events log page 00:11:23.229 =================== 00:11:23.229 Number of FDP events: 1 00:11:23.229 FDP Event #0: 00:11:23.229 Event Type: RU Not Written to Capacity 00:11:23.229 Placement Identifier: Valid 00:11:23.229 NSID: Valid 00:11:23.229 Location: Valid 00:11:23.229 Placement Identifier: 0 00:11:23.229 Event Timestamp: 8 00:11:23.229 Namespace Identifier: 1 00:11:23.229 Reclaim Group Identifier: 0 00:11:23.229 Reclaim Unit Handle Identifier: 0 00:11:23.229 00:11:23.229 FDP test passed 00:11:23.229 00:11:23.229 real 0m0.231s 00:11:23.229 user 0m0.060s 00:11:23.229 sys 0m0.070s 00:11:23.229 09:49:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.229 ************************************ 00:11:23.229 09:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:23.229 END TEST nvme_flexible_data_placement 00:11:23.229 ************************************ 00:11:23.229 ************************************ 00:11:23.229 END TEST nvme_fdp 00:11:23.229 ************************************ 00:11:23.229 00:11:23.229 real 0m7.939s 00:11:23.229 user 0m1.044s 00:11:23.229 sys 0m1.658s 00:11:23.229 09:49:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.229 09:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:23.229 09:49:12 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:23.229 09:49:12 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:23.229 09:49:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.229 09:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.229 09:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:23.229 ************************************ 00:11:23.229 START TEST nvme_rpc 00:11:23.229 ************************************ 00:11:23.229 09:49:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:23.229 * Looking for test storage...
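The FDP run above targets nvme0 because the harness's ctrl_has_fdp check (traced earlier) tests CTRATT bit 19, the Flexible Data Placement capability bit: nvme0 reports ctratt=0x88010, which has bit 19 set, while the other controllers report 0x8000. A minimal standalone sketch of the same check, assuming nvme-cli's human-readable `nvme id-ctrl` output; /dev/nvme0 is a placeholder and the field layout may vary between nvme-cli versions:

    # Sketch: detect FDP support by testing CTRATT bit 19, as the
    # ctrl_has_fdp trace above does.
    ctratt=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "FDP supported (ctratt=$ctratt)"     # true for 0x88010
    else
        echo "no FDP (ctratt=$ctratt)"            # true for 0x8000
    fi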
00:11:23.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:23.229 09:49:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:23.229 09:49:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:23.229 09:49:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:23.489 09:49:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:23.489 09:49:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:23.489 09:49:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:23.489 09:49:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:23.489 09:49:12 -- scripts/common.sh@335 -- # IFS=.-: 00:11:23.489 09:49:12 -- scripts/common.sh@335 -- # read -ra ver1 00:11:23.489 09:49:12 -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.489 09:49:12 -- scripts/common.sh@336 -- # read -ra ver2 00:11:23.489 09:49:12 -- scripts/common.sh@337 -- # local 'op=<' 00:11:23.489 09:49:12 -- scripts/common.sh@339 -- # ver1_l=2 00:11:23.489 09:49:12 -- scripts/common.sh@340 -- # ver2_l=1 00:11:23.489 09:49:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:23.489 09:49:12 -- scripts/common.sh@343 -- # case "$op" in 00:11:23.489 09:49:12 -- scripts/common.sh@344 -- # : 1 00:11:23.489 09:49:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:23.489 09:49:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.489 09:49:12 -- scripts/common.sh@364 -- # decimal 1 00:11:23.489 09:49:12 -- scripts/common.sh@352 -- # local d=1 00:11:23.489 09:49:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.489 09:49:12 -- scripts/common.sh@354 -- # echo 1 00:11:23.489 09:49:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:23.489 09:49:12 -- scripts/common.sh@365 -- # decimal 2 00:11:23.489 09:49:12 -- scripts/common.sh@352 -- # local d=2 00:11:23.489 09:49:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.489 09:49:12 -- scripts/common.sh@354 -- # echo 2 00:11:23.489 09:49:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:23.489 09:49:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:23.489 09:49:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:23.489 09:49:12 -- scripts/common.sh@367 -- # return 0 00:11:23.489 09:49:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.489 09:49:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.489 --rc genhtml_branch_coverage=1 00:11:23.489 --rc genhtml_function_coverage=1 00:11:23.489 --rc genhtml_legend=1 00:11:23.489 --rc geninfo_all_blocks=1 00:11:23.489 --rc geninfo_unexecuted_blocks=1 00:11:23.489 00:11:23.489 ' 00:11:23.489 09:49:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.489 --rc genhtml_branch_coverage=1 00:11:23.489 --rc genhtml_function_coverage=1 00:11:23.489 --rc genhtml_legend=1 00:11:23.489 --rc geninfo_all_blocks=1 00:11:23.489 --rc geninfo_unexecuted_blocks=1 00:11:23.489 00:11:23.489 ' 00:11:23.489 09:49:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.489 --rc genhtml_branch_coverage=1 00:11:23.489 --rc genhtml_function_coverage=1 00:11:23.489 --rc genhtml_legend=1 00:11:23.489 --rc geninfo_all_blocks=1 00:11:23.489 --rc geninfo_unexecuted_blocks=1 00:11:23.489 00:11:23.489 ' 00:11:23.489 09:49:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.489 --rc genhtml_branch_coverage=1 00:11:23.489 --rc genhtml_function_coverage=1 00:11:23.489 --rc genhtml_legend=1 00:11:23.489 --rc geninfo_all_blocks=1 00:11:23.489 --rc geninfo_unexecuted_blocks=1 00:11:23.489 00:11:23.489 ' 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:23.489 09:49:12 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:23.489 09:49:12 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:23.489 09:49:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:23.489 09:49:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:23.489 09:49:12 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:23.489 09:49:12 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:23.489 09:49:12 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:23.489 09:49:12 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:23.489 09:49:12 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:23.489 09:49:12 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:23.489 09:49:12 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:23.489 09:49:12 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66578 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:23.489 09:49:12 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66578 00:11:23.489 09:49:12 -- common/autotest_common.sh@829 -- # '[' -z 66578 ']' 00:11:23.489 09:49:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.489 09:49:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.489 09:49:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.489 09:49:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.489 09:49:12 -- common/autotest_common.sh@10 -- # set +x 00:11:23.489 [2024-12-15 09:49:12.431321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
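The get_first_nvme_bdf trace above shows how the test picks its target device: scripts/gen_nvme.sh prints a bdev JSON config covering every local NVMe controller, jq extracts each traddr, and the first entry (0000:00:06.0 on this run) becomes $bdf. The same idea as a standalone two-liner; the rootdir path follows this run's layout:

    # Sketch: first-NVMe-bdf discovery, as traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # 0000:00:06.0 through 0000:00:09.0 on this run
    bdf=${bdfs[0]}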
00:11:23.489 [2024-12-15 09:49:12.431566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66578 ] 00:11:23.747 [2024-12-15 09:49:12.590769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.006 [2024-12-15 09:49:12.766152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:24.006 [2024-12-15 09:49:12.766618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.006 [2024-12-15 09:49:12.766777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.941 09:49:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.941 09:49:13 -- common/autotest_common.sh@862 -- # return 0 00:11:24.941 09:49:13 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:25.199 Nvme0n1 00:11:25.199 09:49:14 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:25.199 09:49:14 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:25.460 request: 00:11:25.460 { 00:11:25.460 "filename": "non_existing_file", 00:11:25.460 "bdev_name": "Nvme0n1", 00:11:25.460 "method": "bdev_nvme_apply_firmware", 00:11:25.460 "req_id": 1 00:11:25.460 } 00:11:25.460 Got JSON-RPC error response 00:11:25.460 response: 00:11:25.460 { 00:11:25.460 "code": -32603, 00:11:25.460 "message": "open file failed." 00:11:25.460 } 00:11:25.460 09:49:14 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:25.460 09:49:14 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:25.460 09:49:14 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:25.722 09:49:14 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:25.722 09:49:14 -- nvme/nvme_rpc.sh@40 -- # killprocess 66578 00:11:25.722 09:49:14 -- common/autotest_common.sh@936 -- # '[' -z 66578 ']' 00:11:25.722 09:49:14 -- common/autotest_common.sh@940 -- # kill -0 66578 00:11:25.722 09:49:14 -- common/autotest_common.sh@941 -- # uname 00:11:25.722 09:49:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.722 09:49:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66578 00:11:25.722 killing process with pid 66578 00:11:25.722 09:49:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:25.722 09:49:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:25.722 09:49:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66578' 00:11:25.722 09:49:14 -- common/autotest_common.sh@955 -- # kill 66578 00:11:25.722 09:49:14 -- common/autotest_common.sh@960 -- # wait 66578 00:11:27.099 ************************************ 00:11:27.099 END TEST nvme_rpc 00:11:27.099 ************************************ 00:11:27.099 00:11:27.099 real 0m3.825s 00:11:27.099 user 0m7.292s 00:11:27.099 sys 0m0.513s 00:11:27.099 09:49:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:27.099 09:49:15 -- common/autotest_common.sh@10 -- # set +x 00:11:27.099 09:49:16 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:27.099 09:49:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.099 09:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 
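Condensed, the nvme_rpc test that just completed is three rpc.py calls against the spdk_tgt it launched: attach a PCIe controller as bdev Nvme0, expect bdev_nvme_apply_firmware to fail with -32603 ("open file failed.") when handed a missing file, then detach. A sketch of that sequence, assuming a target already listening on the default /var/tmp/spdk.sock:

    # Sketch: the core RPC sequence of nvme_rpc.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0   # exposes Nvme0n1
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo 'got the expected -32603 "open file failed." error'
    fi
    $rpc bdev_nvme_detach_controller Nvme0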
00:11:27.099 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:11:27.099 ************************************ 00:11:27.099 START TEST nvme_rpc_timeouts 00:11:27.099 ************************************ 00:11:27.099 09:49:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:27.099 * Looking for test storage... 00:11:27.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:27.099 09:49:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:27.099 09:49:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:27.099 09:49:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:27.360 09:49:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:27.360 09:49:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:27.360 09:49:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:27.360 09:49:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:27.360 09:49:16 -- scripts/common.sh@335 -- # IFS=.-: 00:11:27.360 09:49:16 -- scripts/common.sh@335 -- # read -ra ver1 00:11:27.360 09:49:16 -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.360 09:49:16 -- scripts/common.sh@336 -- # read -ra ver2 00:11:27.360 09:49:16 -- scripts/common.sh@337 -- # local 'op=<' 00:11:27.360 09:49:16 -- scripts/common.sh@339 -- # ver1_l=2 00:11:27.360 09:49:16 -- scripts/common.sh@340 -- # ver2_l=1 00:11:27.360 09:49:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:27.360 09:49:16 -- scripts/common.sh@343 -- # case "$op" in 00:11:27.360 09:49:16 -- scripts/common.sh@344 -- # : 1 00:11:27.360 09:49:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:27.360 09:49:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:27.360 09:49:16 -- scripts/common.sh@364 -- # decimal 1 00:11:27.360 09:49:16 -- scripts/common.sh@352 -- # local d=1 00:11:27.360 09:49:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.360 09:49:16 -- scripts/common.sh@354 -- # echo 1 00:11:27.360 09:49:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:27.360 09:49:16 -- scripts/common.sh@365 -- # decimal 2 00:11:27.360 09:49:16 -- scripts/common.sh@352 -- # local d=2 00:11:27.360 09:49:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.360 09:49:16 -- scripts/common.sh@354 -- # echo 2 00:11:27.360 09:49:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:27.360 09:49:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:27.360 09:49:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:27.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
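The scripts/common.sh trace repeated above (`lt 1.15 2` via cmp_versions) is how the harness decides whether the installed lcov is new enough: both version strings are split on `.`, `-`, and `:` into arrays and compared element by element, with missing elements treated as 0. A compact standalone sketch of that comparison; this is not the verbatim upstream function:

    # Sketch: element-wise dotted-version compare in the style of
    # cmp_versions. Returns 0 when $1 < $2, so `lt 1.15 2` succeeds.
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo "1.15 < 2"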
00:11:27.360 09:49:16 -- scripts/common.sh@367 -- # return 0 00:11:27.360 09:49:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.360 09:49:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.360 --rc genhtml_branch_coverage=1 00:11:27.360 --rc genhtml_function_coverage=1 00:11:27.360 --rc genhtml_legend=1 00:11:27.360 --rc geninfo_all_blocks=1 00:11:27.360 --rc geninfo_unexecuted_blocks=1 00:11:27.360 00:11:27.360 ' 00:11:27.361 09:49:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.361 --rc genhtml_branch_coverage=1 00:11:27.361 --rc genhtml_function_coverage=1 00:11:27.361 --rc genhtml_legend=1 00:11:27.361 --rc geninfo_all_blocks=1 00:11:27.361 --rc geninfo_unexecuted_blocks=1 00:11:27.361 00:11:27.361 ' 00:11:27.361 09:49:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.361 --rc genhtml_branch_coverage=1 00:11:27.361 --rc genhtml_function_coverage=1 00:11:27.361 --rc genhtml_legend=1 00:11:27.361 --rc geninfo_all_blocks=1 00:11:27.361 --rc geninfo_unexecuted_blocks=1 00:11:27.361 00:11:27.361 ' 00:11:27.361 09:49:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.361 --rc genhtml_branch_coverage=1 00:11:27.361 --rc genhtml_function_coverage=1 00:11:27.361 --rc genhtml_legend=1 00:11:27.361 --rc geninfo_all_blocks=1 00:11:27.361 --rc geninfo_unexecuted_blocks=1 00:11:27.361 00:11:27.361 ' 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66645 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66645 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66676 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66676 00:11:27.361 09:49:16 -- common/autotest_common.sh@829 -- # '[' -z 66676 ']' 00:11:27.361 09:49:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.361 09:49:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.361 09:49:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.361 09:49:16 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:27.361 09:49:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.361 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:11:27.361 [2024-12-15 09:49:16.264536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:27.361 [2024-12-15 09:49:16.264877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66676 ] 00:11:27.620 [2024-12-15 09:49:16.418263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:27.620 [2024-12-15 09:49:16.572343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:27.620 [2024-12-15 09:49:16.572712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.620 [2024-12-15 09:49:16.572887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.187 09:49:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.187 09:49:17 -- common/autotest_common.sh@862 -- # return 0 00:11:28.187 09:49:17 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:28.187 Checking default timeout settings: 00:11:28.187 09:49:17 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:28.445 09:49:17 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:28.445 Making settings changes with rpc: 00:11:28.445 09:49:17 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:28.704 09:49:17 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:28.704 Check default vs. modified settings: 00:11:28.704 09:49:17 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66645 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66645 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.962 Setting action_on_timeout is changed as expected. 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
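The verification loop above is mechanical: save_config is dumped before and after bdev_nvme_set_options, each setting is grepped out of both dumps, awk and sed reduce it to a bare token, and the before/after tokens must differ. A trimmed sketch of the same verification that substitutes jq for the grep/awk/sed pipeline (an editorial swap; the option names and values are the ones used in this run, and spdk_tgt is assumed to be listening already):

    # Sketch: verify bdev_nvme_set_options took effect by diffing
    # save_config output before and after, as nvme_rpc_timeouts.sh does.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter='.subsystems[] | select(.subsystem == "bdev")
            | .config[] | select(.method == "bdev_nvme_set_options").params'

    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
         --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified

    for key in action_on_timeout timeout_us timeout_admin_us; do
        before=$(jq -r "$filter.$key" /tmp/settings_default)
        after=$(jq -r "$filter.$key" /tmp/settings_modified)
        [[ $before != "$after" ]] && echo "Setting $key is changed as expected."
    done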
00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66645 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66645 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:28.962 Setting timeout_us is changed as expected. 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66645 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66645 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:28.962 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:28.962 Setting timeout_admin_us is changed as expected. 00:11:28.963 09:49:17 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:28.963 09:49:17 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:28.963 09:49:17 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:28.963 09:49:17 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:28.963 09:49:17 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66645 /tmp/settings_modified_66645 00:11:28.963 09:49:17 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66676 00:11:28.963 09:49:17 -- common/autotest_common.sh@936 -- # '[' -z 66676 ']' 00:11:28.963 09:49:17 -- common/autotest_common.sh@940 -- # kill -0 66676 00:11:28.963 09:49:17 -- common/autotest_common.sh@941 -- # uname 00:11:28.963 09:49:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.963 09:49:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66676 00:11:28.963 killing process with pid 66676 00:11:28.963 09:49:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:28.963 09:49:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:28.963 09:49:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66676' 00:11:28.963 09:49:17 -- common/autotest_common.sh@955 -- # kill 66676 00:11:28.963 09:49:17 -- common/autotest_common.sh@960 -- # wait 66676 00:11:30.342 RPC TIMEOUT SETTING TEST PASSED. 00:11:30.342 09:49:19 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
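killprocess, traced twice in this section (pids 66578 and 66676), is the harness's guarded teardown: confirm the pid is still alive with kill -0, confirm via ps that it names the expected reactor process rather than sudo, then kill and wait so the exit status is reaped. A condensed sketch; the comm check here stands in for the fuller upstream logic:

    # Sketch: guarded process teardown in the style of killprocess above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0      # already gone
        local comm
        comm=$(ps --no-headers -o comm= "$pid")
        [[ $comm != sudo ]] || return 1              # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                   # wait works: pid is our child
    }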
00:11:30.342 00:11:30.342 real 0m3.035s 00:11:30.342 user 0m5.688s 00:11:30.342 sys 0m0.502s 00:11:30.342 ************************************ 00:11:30.342 END TEST nvme_rpc_timeouts 00:11:30.342 ************************************ 00:11:30.342 09:49:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.342 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:30.342 09:49:19 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:30.342 09:49:19 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:30.342 09:49:19 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:30.342 09:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.342 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:30.342 ************************************ 00:11:30.342 START TEST nvme_xnvme 00:11:30.342 ************************************ 00:11:30.342 09:49:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:30.342 * Looking for test storage... 00:11:30.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:30.342 09:49:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.342 09:49:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.342 09:49:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.342 09:49:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.342 09:49:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.342 09:49:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.342 09:49:19 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.342 09:49:19 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.342 09:49:19 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.342 09:49:19 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.342 09:49:19 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.342 09:49:19 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.342 09:49:19 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.342 09:49:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.342 09:49:19 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.342 09:49:19 -- scripts/common.sh@344 -- # : 1 00:11:30.342 09:49:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.342 09:49:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.342 09:49:19 -- scripts/common.sh@364 -- # decimal 1 00:11:30.342 09:49:19 -- scripts/common.sh@352 -- # local d=1 00:11:30.342 09:49:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.342 09:49:19 -- scripts/common.sh@354 -- # echo 1 00:11:30.342 09:49:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.342 09:49:19 -- scripts/common.sh@365 -- # decimal 2 00:11:30.342 09:49:19 -- scripts/common.sh@352 -- # local d=2 00:11:30.342 09:49:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.342 09:49:19 -- scripts/common.sh@354 -- # echo 2 00:11:30.342 09:49:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.342 09:49:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.342 09:49:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.342 09:49:19 -- scripts/common.sh@367 -- # return 0 00:11:30.342 09:49:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.342 --rc genhtml_branch_coverage=1 00:11:30.342 --rc genhtml_function_coverage=1 00:11:30.342 --rc genhtml_legend=1 00:11:30.342 --rc geninfo_all_blocks=1 00:11:30.342 --rc geninfo_unexecuted_blocks=1 00:11:30.342 00:11:30.342 ' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.342 --rc genhtml_branch_coverage=1 00:11:30.342 --rc genhtml_function_coverage=1 00:11:30.342 --rc genhtml_legend=1 00:11:30.342 --rc geninfo_all_blocks=1 00:11:30.342 --rc geninfo_unexecuted_blocks=1 00:11:30.342 00:11:30.342 ' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.342 --rc genhtml_branch_coverage=1 00:11:30.342 --rc genhtml_function_coverage=1 00:11:30.342 --rc genhtml_legend=1 00:11:30.342 --rc geninfo_all_blocks=1 00:11:30.342 --rc geninfo_unexecuted_blocks=1 00:11:30.342 00:11:30.342 ' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.342 --rc genhtml_branch_coverage=1 00:11:30.342 --rc genhtml_function_coverage=1 00:11:30.342 --rc genhtml_legend=1 00:11:30.342 --rc geninfo_all_blocks=1 00:11:30.342 --rc geninfo_unexecuted_blocks=1 00:11:30.342 00:11:30.342 ' 00:11:30.342 09:49:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.342 09:49:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.342 09:49:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.342 09:49:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.342 09:49:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.342 09:49:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.342 09:49:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.342 09:49:19 -- paths/export.sh@5 -- # export PATH 00:11:30.342 09:49:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:30.342 09:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.342 09:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.342 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:30.342 ************************************ 00:11:30.342 START TEST xnvme_to_malloc_dd_copy 00:11:30.342 ************************************ 00:11:30.342 09:49:19 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:30.342 09:49:19 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:30.342 09:49:19 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:30.342 09:49:19 -- dd/common.sh@191 -- # return 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@18 -- # local io 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:30.342 09:49:19 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:30.342 09:49:19 -- dd/common.sh@31 -- # xtrace_disable 00:11:30.342 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:11:30.602 { 00:11:30.602 "subsystems": [ 00:11:30.602 { 00:11:30.602 "subsystem": "bdev", 00:11:30.602 "config": [ 00:11:30.602 { 00:11:30.602 "params": { 00:11:30.602 "block_size": 512, 00:11:30.602 "num_blocks": 2097152, 00:11:30.602 "name": "malloc0" 00:11:30.602 }, 00:11:30.602 "method": "bdev_malloc_create" 00:11:30.602 }, 00:11:30.602 { 00:11:30.602 "params": { 00:11:30.602 "io_mechanism": "libaio", 00:11:30.602 "filename": "/dev/nullb0", 00:11:30.602 "name": "null0" 00:11:30.602 }, 00:11:30.602 "method": "bdev_xnvme_create" 00:11:30.602 }, 00:11:30.602 { 00:11:30.602 "method": "bdev_wait_for_examine" 00:11:30.602 } 00:11:30.602 ] 00:11:30.602 } 00:11:30.602 ] 00:11:30.602 } 00:11:30.602 [2024-12-15 09:49:19.394206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:30.602 [2024-12-15 09:49:19.394350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66804 ] 00:11:30.602 [2024-12-15 09:49:19.544879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.861 [2024-12-15 09:49:19.697100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.762  [2024-12-15T09:49:22.711Z] Copying: 309/1024 [MB] (309 MBps) [2024-12-15T09:49:23.646Z] Copying: 620/1024 [MB] (310 MBps) [2024-12-15T09:49:23.904Z] Copying: 930/1024 [MB] (310 MBps) [2024-12-15T09:49:25.810Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:11:36.794 00:11:36.794 09:49:25 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:36.794 09:49:25 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:36.794 09:49:25 -- dd/common.sh@31 -- # xtrace_disable 00:11:36.794 09:49:25 -- common/autotest_common.sh@10 -- # set +x 00:11:37.054 { 00:11:37.054 "subsystems": [ 00:11:37.054 { 00:11:37.054 "subsystem": "bdev", 00:11:37.054 "config": [ 00:11:37.054 { 00:11:37.054 "params": { 00:11:37.054 "block_size": 512, 00:11:37.054 "num_blocks": 2097152, 00:11:37.054 "name": "malloc0" 00:11:37.054 }, 00:11:37.054 "method": "bdev_malloc_create" 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "params": { 00:11:37.054 "io_mechanism": "libaio", 00:11:37.054 "filename": "/dev/nullb0", 00:11:37.054 "name": "null0" 00:11:37.054 }, 00:11:37.054 "method": "bdev_xnvme_create" 00:11:37.054 }, 00:11:37.054 { 00:11:37.054 "method": "bdev_wait_for_examine" 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 } 00:11:37.054 ] 00:11:37.054 } 00:11:37.054 [2024-12-15 09:49:25.838989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
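[editor's note] For anyone replaying this step outside the harness: spdk_dd receives the bdev config dumped above over an anonymous descriptor (/dev/fd/62). A minimal standalone sketch follows; the repo path, the 1 GiB null_blk sizing, and the JSON body are taken from this log, while writing the config to a named file instead of a descriptor is my own simplification.

# Back /dev/nullb0 with a 1 GiB null_blk instance, as init_null_blk does above.
sudo modprobe null_blk gb=1

# Same bdev config the harness generates: a 1 GiB malloc bdev (512 B x 2097152
# blocks) and an xnvme bdev (libaio) wrapping /dev/nullb0.
cat > xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Forward pass: malloc0 -> null0. The reverse pass starting here in the log
# simply swaps --ib and --ob.
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json ./xnvme_copy.json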
00:11:37.054 [2024-12-15 09:49:25.839107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66891 ] 00:11:37.054 [2024-12-15 09:49:25.988293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.313 [2024-12-15 09:49:26.145199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.218  [2024-12-15T09:49:29.169Z] Copying: 312/1024 [MB] (312 MBps) [2024-12-15T09:49:30.104Z] Copying: 626/1024 [MB] (313 MBps) [2024-12-15T09:49:30.362Z] Copying: 940/1024 [MB] (313 MBps) [2024-12-15T09:49:32.306Z] Copying: 1024/1024 [MB] (average 313 MBps) 00:11:43.290 00:11:43.290 09:49:32 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:43.290 09:49:32 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:43.290 09:49:32 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:43.290 09:49:32 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:43.290 09:49:32 -- dd/common.sh@31 -- # xtrace_disable 00:11:43.290 09:49:32 -- common/autotest_common.sh@10 -- # set +x 00:11:43.290 { 00:11:43.290 "subsystems": [ 00:11:43.290 { 00:11:43.290 "subsystem": "bdev", 00:11:43.290 "config": [ 00:11:43.290 { 00:11:43.290 "params": { 00:11:43.290 "block_size": 512, 00:11:43.290 "num_blocks": 2097152, 00:11:43.290 "name": "malloc0" 00:11:43.290 }, 00:11:43.290 "method": "bdev_malloc_create" 00:11:43.290 }, 00:11:43.290 { 00:11:43.290 "params": { 00:11:43.290 "io_mechanism": "io_uring", 00:11:43.290 "filename": "/dev/nullb0", 00:11:43.290 "name": "null0" 00:11:43.290 }, 00:11:43.290 "method": "bdev_xnvme_create" 00:11:43.290 }, 00:11:43.290 { 00:11:43.290 "method": "bdev_wait_for_examine" 00:11:43.290 } 00:11:43.290 ] 00:11:43.290 } 00:11:43.290 ] 00:11:43.290 } 00:11:43.290 [2024-12-15 09:49:32.231987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
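[editor's note] The io_uring passes that start here reuse the exact same config with a single field changed, io_mechanism. Assuming the xnvme_copy.json from the sketch above (and jq, which the harness itself does not use), the flip is:

# Rewrite only the xnvme bdev's io_mechanism; nothing else differs
# between the libaio and io_uring runs.
jq '(.subsystems[].config[] | select(.method == "bdev_xnvme_create").params.io_mechanism) = "io_uring"' \
  xnvme_copy.json > xnvme_copy_uring.json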
00:11:43.290 [2024-12-15 09:49:32.232107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66967 ] 00:11:43.549 [2024-12-15 09:49:32.380220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.549 [2024-12-15 09:49:32.535830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.451  [2024-12-15T09:49:35.400Z] Copying: 321/1024 [MB] (321 MBps) [2024-12-15T09:49:36.334Z] Copying: 642/1024 [MB] (321 MBps) [2024-12-15T09:49:36.592Z] Copying: 963/1024 [MB] (320 MBps) [2024-12-15T09:49:38.493Z] Copying: 1024/1024 [MB] (average 321 MBps) 00:11:49.477 00:11:49.477 09:49:38 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:49.477 09:49:38 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:49.477 09:49:38 -- dd/common.sh@31 -- # xtrace_disable 00:11:49.477 09:49:38 -- common/autotest_common.sh@10 -- # set +x 00:11:49.736 { 00:11:49.736 "subsystems": [ 00:11:49.736 { 00:11:49.736 "subsystem": "bdev", 00:11:49.736 "config": [ 00:11:49.736 { 00:11:49.736 "params": { 00:11:49.736 "block_size": 512, 00:11:49.736 "num_blocks": 2097152, 00:11:49.736 "name": "malloc0" 00:11:49.736 }, 00:11:49.736 "method": "bdev_malloc_create" 00:11:49.736 }, 00:11:49.736 { 00:11:49.736 "params": { 00:11:49.736 "io_mechanism": "io_uring", 00:11:49.736 "filename": "/dev/nullb0", 00:11:49.736 "name": "null0" 00:11:49.736 }, 00:11:49.736 "method": "bdev_xnvme_create" 00:11:49.736 }, 00:11:49.736 { 00:11:49.736 "method": "bdev_wait_for_examine" 00:11:49.736 } 00:11:49.736 ] 00:11:49.736 } 00:11:49.736 ] 00:11:49.736 } 00:11:49.736 [2024-12-15 09:49:38.524717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
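[editor's note] Across the four copy passes the null device sustains roughly 310-313 MBps under libaio and 321-325 MBps under io_uring, and the bdevperf stage that follows shows the same ordering for 4 KiB random reads (~208K IOPS libaio vs ~237K IOPS io_uring). For reference, a hedged sketch of that bdevperf invocation, flags copied from the run below; xnvme_bdevperf.json is assumed to hold just the bdev_xnvme_create entry plus bdev_wait_for_examine, as the next config dump shows:

# The bdevperf stage reloads the 1 GiB null device first (init_null_blk),
# then runs 4 KiB random reads at queue depth 64 for 5 seconds against null0.
sudo modprobe null_blk gb=1
sudo /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json ./xnvme_bdevperf.json -q 64 -w randread -t 5 -T null0 -o 4096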
00:11:49.736 [2024-12-15 09:49:38.524833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67043 ] 00:11:49.736 [2024-12-15 09:49:38.674880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.995 [2024-12-15 09:49:38.825595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.900  [2024-12-15T09:49:41.850Z] Copying: 324/1024 [MB] (324 MBps) [2024-12-15T09:49:42.785Z] Copying: 649/1024 [MB] (325 MBps) [2024-12-15T09:49:42.785Z] Copying: 976/1024 [MB] (326 MBps) [2024-12-15T09:49:44.687Z] Copying: 1024/1024 [MB] (average 325 MBps) 00:11:55.671 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:55.931 09:49:44 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:55.931 00:11:55.931 real 0m25.443s 00:11:55.931 user 0m22.389s 00:11:55.931 sys 0m2.528s 00:11:55.931 ************************************ 00:11:55.931 END TEST xnvme_to_malloc_dd_copy 00:11:55.931 ************************************ 00:11:55.931 09:49:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:55.931 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:55.931 09:49:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:55.931 09:49:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.931 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:11:55.931 ************************************ 00:11:55.931 START TEST xnvme_bdevperf 00:11:55.931 ************************************ 00:11:55.931 09:49:44 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:55.931 09:49:44 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:55.931 09:49:44 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:55.931 09:49:44 -- dd/common.sh@191 -- # return 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@60 -- # local io 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:55.931 09:49:44 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:55.931 09:49:44 -- dd/common.sh@31 -- # xtrace_disable 00:11:55.931 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:11:55.931 { 00:11:55.931 "subsystems": [ 00:11:55.931 { 00:11:55.931 "subsystem": "bdev", 00:11:55.931 "config": [ 00:11:55.931 { 00:11:55.931 "params": { 00:11:55.931 "io_mechanism": "libaio", 
00:11:55.931 "filename": "/dev/nullb0", 00:11:55.931 "name": "null0" 00:11:55.931 }, 00:11:55.931 "method": "bdev_xnvme_create" 00:11:55.931 }, 00:11:55.931 { 00:11:55.931 "method": "bdev_wait_for_examine" 00:11:55.931 } 00:11:55.931 ] 00:11:55.931 } 00:11:55.931 ] 00:11:55.931 } 00:11:55.931 [2024-12-15 09:49:44.884466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:55.931 [2024-12-15 09:49:44.884574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67142 ] 00:11:56.190 [2024-12-15 09:49:45.031815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.190 [2024-12-15 09:49:45.171763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.448 Running I/O for 5 seconds... 00:12:01.712 00:12:01.712 Latency(us) 00:12:01.712 [2024-12-15T09:49:50.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.712 [2024-12-15T09:49:50.728Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:01.712 null0 : 5.00 208507.16 814.48 0.00 0.00 304.68 115.00 617.55 00:12:01.712 [2024-12-15T09:49:50.728Z] =================================================================================================================== 00:12:01.712 [2024-12-15T09:49:50.728Z] Total : 208507.16 814.48 0.00 0.00 304.68 115.00 617.55 00:12:02.279 09:49:51 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:02.279 09:49:51 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:02.279 09:49:51 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:02.279 09:49:51 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:02.279 09:49:51 -- dd/common.sh@31 -- # xtrace_disable 00:12:02.279 09:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:02.279 { 00:12:02.279 "subsystems": [ 00:12:02.279 { 00:12:02.279 "subsystem": "bdev", 00:12:02.279 "config": [ 00:12:02.279 { 00:12:02.279 "params": { 00:12:02.279 "io_mechanism": "io_uring", 00:12:02.279 "filename": "/dev/nullb0", 00:12:02.279 "name": "null0" 00:12:02.279 }, 00:12:02.279 "method": "bdev_xnvme_create" 00:12:02.279 }, 00:12:02.279 { 00:12:02.279 "method": "bdev_wait_for_examine" 00:12:02.279 } 00:12:02.279 ] 00:12:02.279 } 00:12:02.279 ] 00:12:02.279 } 00:12:02.279 [2024-12-15 09:49:51.073940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:02.279 [2024-12-15 09:49:51.074047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67222 ] 00:12:02.279 [2024-12-15 09:49:51.222535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.538 [2024-12-15 09:49:51.375309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.796 Running I/O for 5 seconds... 
00:12:08.060 00:12:08.060 Latency(us) 00:12:08.060 [2024-12-15T09:49:57.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.060 [2024-12-15T09:49:57.076Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:08.060 null0 : 5.00 237282.24 926.88 0.00 0.00 267.46 155.18 1310.72 00:12:08.060 [2024-12-15T09:49:57.076Z] =================================================================================================================== 00:12:08.060 [2024-12-15T09:49:57.076Z] Total : 237282.24 926.88 0.00 0.00 267.46 155.18 1310.72 00:12:08.320 09:49:57 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:08.320 09:49:57 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:08.320 00:12:08.320 real 0m12.397s 00:12:08.320 user 0m9.973s 00:12:08.320 sys 0m2.164s 00:12:08.320 09:49:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:08.320 09:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:08.320 ************************************ 00:12:08.320 END TEST xnvme_bdevperf 00:12:08.320 ************************************ 00:12:08.320 00:12:08.320 real 0m38.111s 00:12:08.320 user 0m32.468s 00:12:08.320 sys 0m4.821s 00:12:08.320 09:49:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:08.320 ************************************ 00:12:08.320 END TEST nvme_xnvme 00:12:08.320 09:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:08.320 ************************************ 00:12:08.320 09:49:57 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:08.320 09:49:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:08.320 09:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.320 09:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:08.320 ************************************ 00:12:08.320 START TEST blockdev_xnvme 00:12:08.320 ************************************ 00:12:08.320 09:49:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:08.581 * Looking for test storage... 00:12:08.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:08.581 09:49:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:08.581 09:49:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:08.581 09:49:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:08.581 09:49:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:08.581 09:49:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:08.581 09:49:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:08.581 09:49:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:08.581 09:49:57 -- scripts/common.sh@335 -- # IFS=.-: 00:12:08.581 09:49:57 -- scripts/common.sh@335 -- # read -ra ver1 00:12:08.581 09:49:57 -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.581 09:49:57 -- scripts/common.sh@336 -- # read -ra ver2 00:12:08.581 09:49:57 -- scripts/common.sh@337 -- # local 'op=<' 00:12:08.581 09:49:57 -- scripts/common.sh@339 -- # ver1_l=2 00:12:08.581 09:49:57 -- scripts/common.sh@340 -- # ver2_l=1 00:12:08.581 09:49:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:08.581 09:49:57 -- scripts/common.sh@343 -- # case "$op" in 00:12:08.581 09:49:57 -- scripts/common.sh@344 -- # : 1 00:12:08.581 09:49:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:08.581 09:49:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.581 09:49:57 -- scripts/common.sh@364 -- # decimal 1 00:12:08.581 09:49:57 -- scripts/common.sh@352 -- # local d=1 00:12:08.581 09:49:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.581 09:49:57 -- scripts/common.sh@354 -- # echo 1 00:12:08.581 09:49:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:08.581 09:49:57 -- scripts/common.sh@365 -- # decimal 2 00:12:08.582 09:49:57 -- scripts/common.sh@352 -- # local d=2 00:12:08.582 09:49:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.582 09:49:57 -- scripts/common.sh@354 -- # echo 2 00:12:08.582 09:49:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:08.582 09:49:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:08.582 09:49:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:08.582 09:49:57 -- scripts/common.sh@367 -- # return 0 00:12:08.582 09:49:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.582 09:49:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:08.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.582 --rc genhtml_branch_coverage=1 00:12:08.582 --rc genhtml_function_coverage=1 00:12:08.582 --rc genhtml_legend=1 00:12:08.582 --rc geninfo_all_blocks=1 00:12:08.582 --rc geninfo_unexecuted_blocks=1 00:12:08.582 00:12:08.582 ' 00:12:08.582 09:49:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:08.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.582 --rc genhtml_branch_coverage=1 00:12:08.582 --rc genhtml_function_coverage=1 00:12:08.582 --rc genhtml_legend=1 00:12:08.582 --rc geninfo_all_blocks=1 00:12:08.582 --rc geninfo_unexecuted_blocks=1 00:12:08.582 00:12:08.582 ' 00:12:08.582 09:49:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:08.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.582 --rc genhtml_branch_coverage=1 00:12:08.582 --rc genhtml_function_coverage=1 00:12:08.582 --rc genhtml_legend=1 00:12:08.582 --rc geninfo_all_blocks=1 00:12:08.582 --rc geninfo_unexecuted_blocks=1 00:12:08.582 00:12:08.582 ' 00:12:08.582 09:49:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:08.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.582 --rc genhtml_branch_coverage=1 00:12:08.582 --rc genhtml_function_coverage=1 00:12:08.582 --rc genhtml_legend=1 00:12:08.582 --rc geninfo_all_blocks=1 00:12:08.582 --rc geninfo_unexecuted_blocks=1 00:12:08.582 00:12:08.582 ' 00:12:08.582 09:49:57 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:08.582 09:49:57 -- bdev/nbd_common.sh@6 -- # set -e 00:12:08.582 09:49:57 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:08.582 09:49:57 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:08.582 09:49:57 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:08.582 09:49:57 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:08.582 09:49:57 -- bdev/blockdev.sh@18 -- # : 00:12:08.582 09:49:57 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:08.582 09:49:57 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:08.582 09:49:57 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:08.582 09:49:57 -- bdev/blockdev.sh@672 -- # uname -s 00:12:08.582 09:49:57 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:08.582 09:49:57 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:08.582 09:49:57 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:08.582 09:49:57 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:08.582 09:49:57 -- bdev/blockdev.sh@682 -- # dek= 00:12:08.582 09:49:57 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:08.582 09:49:57 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:08.582 09:49:57 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:08.582 09:49:57 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:08.582 09:49:57 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:08.582 09:49:57 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:08.582 09:49:57 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67364 00:12:08.582 09:49:57 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:08.582 09:49:57 -- bdev/blockdev.sh@47 -- # waitforlisten 67364 00:12:08.582 09:49:57 -- common/autotest_common.sh@829 -- # '[' -z 67364 ']' 00:12:08.582 09:49:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.582 09:49:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.582 09:49:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.582 09:49:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.582 09:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:08.582 09:49:57 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:08.582 [2024-12-15 09:49:57.521172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:08.582 [2024-12-15 09:49:57.521325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67364 ] 00:12:08.841 [2024-12-15 09:49:57.671732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.841 [2024-12-15 09:49:57.812396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.841 [2024-12-15 09:49:57.812546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.407 09:49:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.408 09:49:58 -- common/autotest_common.sh@862 -- # return 0 00:12:09.408 09:49:58 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:09.408 09:49:58 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:09.408 09:49:58 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:09.408 09:49:58 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:09.408 09:49:58 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:09.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.974 Waiting for block devices as requested 00:12:09.974 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.974 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.974 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.974 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.240 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:15.240 09:50:04 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:15.240 09:50:04 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:15.240 09:50:04 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:15.240 09:50:04 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:15.240 09:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:15.240 09:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:15.240 09:50:04 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:15.240 09:50:04 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:15.240 09:50:04 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:15.240 09:50:04 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:15.240 09:50:04 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:15.240 09:50:04 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:15.240 09:50:04 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:15.240 09:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.240 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:15.240 09:50:04 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:15.240 nvme0n1 00:12:15.240 nvme1n1 00:12:15.240 nvme1n2 00:12:15.240 nvme1n3 00:12:15.240 nvme2n1 00:12:15.240 nvme3n1 00:12:15.240 09:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:15.240 09:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.240 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:15.240 09:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.240 09:50:04 -- bdev/blockdev.sh@738 -- # cat 00:12:15.240 09:50:04 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:15.240 09:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.240 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:15.240 09:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.241 09:50:04 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:15.241 09:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.241 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:15.241 09:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.241 09:50:04 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:15.241 09:50:04 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.241 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:15.241 09:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.241 09:50:04 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:15.241 09:50:04 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:15.241 09:50:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.241 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:12:15.241 09:50:04 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:15.241 09:50:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.241 09:50:04 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:15.241 09:50:04 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:15.241 09:50:04 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e2a5e2e8-7f8c-4a1e-8e46-345b0f0ce52f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e2a5e2e8-7f8c-4a1e-8e46-345b0f0ce52f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5cd725c7-4f0c-40de-a6cd-4f95fdb211ec"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5cd725c7-4f0c-40de-a6cd-4f95fdb211ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "62aeb4d7-2679-4d8a-8fba-b45334d928ff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "62aeb4d7-2679-4d8a-8fba-b45334d928ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "e967c824-0e90-4715-8038-861578c5cf16"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e967c824-0e90-4715-8038-861578c5cf16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4d043fb4-0a11-46f8-b581-3097c74bd53c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d043fb4-0a11-46f8-b581-3097c74bd53c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b7dfef40-3df9-40ff-8f80-16a2fcc722e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b7dfef40-3df9-40ff-8f80-16a2fcc722e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:15.241 09:50:04 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:15.241 09:50:04 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:15.241 09:50:04 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:15.241 09:50:04 -- bdev/blockdev.sh@752 -- # killprocess 67364 00:12:15.241 09:50:04 -- common/autotest_common.sh@936 -- # '[' -z 67364 ']' 00:12:15.241 09:50:04 -- common/autotest_common.sh@940 -- # kill -0 67364 00:12:15.241 09:50:04 -- common/autotest_common.sh@941 -- # uname 00:12:15.241 09:50:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.241 09:50:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67364 00:12:15.502 09:50:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:15.502 killing process with pid 67364 00:12:15.502 09:50:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:15.502 09:50:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67364' 00:12:15.502 09:50:04 -- common/autotest_common.sh@955 -- # kill 67364 00:12:15.502 09:50:04 -- common/autotest_common.sh@960 -- # wait 67364 00:12:16.936 09:50:05 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:16.936 09:50:05 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:16.936 09:50:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:16.936 09:50:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.936 09:50:05 -- common/autotest_common.sh@10 -- # set +x 00:12:16.936 ************************************ 00:12:16.936 START TEST bdev_hello_world 00:12:16.936 ************************************ 00:12:16.936 09:50:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:16.936 [2024-12-15 09:50:05.755323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
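[editor's note] hello_bdev is the smallest consumer of the generated bdev.json: it opens the named bdev, writes "Hello World!", reads it back, and exits, which is exactly the NOTICE sequence printed below. A manual invocation, assuming the workspace layout from this job:

cd /home/vagrant/spdk_repo/spdk
# Expect the 'Writing to the bdev' / 'Read string from bdev : Hello World!'
# notices seen in the log, then a clean exit.
sudo ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1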
00:12:16.936 [2024-12-15 09:50:05.755430] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67744 ] 00:12:16.936 [2024-12-15 09:50:05.902580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.194 [2024-12-15 09:50:06.052766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.453 [2024-12-15 09:50:06.335037] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:17.453 [2024-12-15 09:50:06.335072] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:17.453 [2024-12-15 09:50:06.335083] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:17.453 [2024-12-15 09:50:06.336505] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:17.453 [2024-12-15 09:50:06.336870] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:17.453 [2024-12-15 09:50:06.336889] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:17.453 [2024-12-15 09:50:06.337098] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:17.453 00:12:17.453 [2024-12-15 09:50:06.337121] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:18.021 00:12:18.021 real 0m1.256s 00:12:18.021 user 0m1.001s 00:12:18.021 sys 0m0.143s 00:12:18.021 ************************************ 00:12:18.021 END TEST bdev_hello_world 00:12:18.021 ************************************ 00:12:18.021 09:50:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:18.021 09:50:06 -- common/autotest_common.sh@10 -- # set +x 00:12:18.021 09:50:06 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:18.021 09:50:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.021 09:50:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.021 09:50:06 -- common/autotest_common.sh@10 -- # set +x 00:12:18.021 ************************************ 00:12:18.021 START TEST bdev_bounds 00:12:18.021 ************************************ 00:12:18.021 09:50:06 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:18.021 09:50:06 -- bdev/blockdev.sh@288 -- # bdevio_pid=67776 00:12:18.021 09:50:06 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:18.021 Process bdevio pid: 67776 00:12:18.021 09:50:06 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67776' 00:12:18.021 09:50:06 -- bdev/blockdev.sh@291 -- # waitforlisten 67776 00:12:18.021 09:50:06 -- common/autotest_common.sh@829 -- # '[' -z 67776 ']' 00:12:18.021 09:50:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.021 09:50:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.021 09:50:06 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:18.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.021 09:50:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
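[editor's note] bdev_bounds drives bdevio in two halves: the server side parks on /var/tmp/spdk.sock waiting for an RPC, and tests.py fires the perform_tests RPC that runs the CUnit suites against every bdev in the config. A sketch with the flags copied from the log (-w = wait for the start RPC, -s 0 as passed by the harness):

cd /home/vagrant/spdk_repo/spdk
# Half 1: bdevio loads bdev.json and waits on /var/tmp/spdk.sock.
sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &

# Half 2: kick off the read/write/reset suites
# (138 tests across the 6 bdevs in this run).
sudo ./test/bdev/bdevio/tests.py perform_tests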
00:12:18.021 09:50:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.021 09:50:06 -- common/autotest_common.sh@10 -- # set +x 00:12:18.280 [2024-12-15 09:50:07.058537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:18.281 [2024-12-15 09:50:07.058643] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67776 ] 00:12:18.281 [2024-12-15 09:50:07.207297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:18.540 [2024-12-15 09:50:07.426196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.540 [2024-12-15 09:50:07.426513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.540 [2024-12-15 09:50:07.426603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.113 09:50:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.113 09:50:07 -- common/autotest_common.sh@862 -- # return 0 00:12:19.113 09:50:07 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:19.113 I/O targets: 00:12:19.113 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:19.113 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:19.113 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:19.113 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:19.113 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:19.113 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:19.113 00:12:19.113 00:12:19.113 CUnit - A unit testing framework for C - Version 2.1-3 00:12:19.113 http://cunit.sourceforge.net/ 00:12:19.113 00:12:19.113 00:12:19.113 Suite: bdevio tests on: nvme3n1 00:12:19.113 Test: blockdev write read block ...passed 00:12:19.113 Test: blockdev write zeroes read block ...passed 00:12:19.113 Test: blockdev write zeroes read no split ...passed 00:12:19.113 Test: blockdev write zeroes read split ...passed 00:12:19.113 Test: blockdev write zeroes read split partial ...passed 00:12:19.113 Test: blockdev reset ...passed 00:12:19.113 Test: blockdev write read 8 blocks ...passed 00:12:19.113 Test: blockdev write read size > 128k ...passed 00:12:19.113 Test: blockdev write read invalid size ...passed 00:12:19.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.113 Test: blockdev write read max offset ...passed 00:12:19.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.113 Test: blockdev writev readv 8 blocks ...passed 00:12:19.113 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.113 Test: blockdev writev readv block ...passed 00:12:19.113 Test: blockdev writev readv size > 128k ...passed 00:12:19.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.113 Test: blockdev comparev and writev ...passed 00:12:19.113 Test: blockdev nvme passthru rw ...passed 00:12:19.113 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.113 Test: blockdev nvme admin passthru ...passed 00:12:19.113 Test: blockdev copy ...passed 00:12:19.113 Suite: bdevio tests on: nvme2n1 00:12:19.113 Test: blockdev write read block ...passed 00:12:19.113 Test: blockdev write zeroes read block ...passed 00:12:19.113 Test: blockdev write zeroes read no split ...passed 00:12:19.113 Test: blockdev 
write zeroes read split ...passed 00:12:19.113 Test: blockdev write zeroes read split partial ...passed 00:12:19.113 Test: blockdev reset ...passed 00:12:19.400 Test: blockdev write read 8 blocks ...passed 00:12:19.400 Test: blockdev write read size > 128k ...passed 00:12:19.400 Test: blockdev write read invalid size ...passed 00:12:19.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.400 Test: blockdev write read max offset ...passed 00:12:19.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.400 Test: blockdev writev readv 8 blocks ...passed 00:12:19.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.400 Test: blockdev writev readv block ...passed 00:12:19.400 Test: blockdev writev readv size > 128k ...passed 00:12:19.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.400 Test: blockdev comparev and writev ...passed 00:12:19.400 Test: blockdev nvme passthru rw ...passed 00:12:19.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.400 Test: blockdev nvme admin passthru ...passed 00:12:19.400 Test: blockdev copy ...passed 00:12:19.400 Suite: bdevio tests on: nvme1n3 00:12:19.400 Test: blockdev write read block ...passed 00:12:19.400 Test: blockdev write zeroes read block ...passed 00:12:19.400 Test: blockdev write zeroes read no split ...passed 00:12:19.400 Test: blockdev write zeroes read split ...passed 00:12:19.400 Test: blockdev write zeroes read split partial ...passed 00:12:19.400 Test: blockdev reset ...passed 00:12:19.400 Test: blockdev write read 8 blocks ...passed 00:12:19.400 Test: blockdev write read size > 128k ...passed 00:12:19.400 Test: blockdev write read invalid size ...passed 00:12:19.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.400 Test: blockdev write read max offset ...passed 00:12:19.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.400 Test: blockdev writev readv 8 blocks ...passed 00:12:19.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.400 Test: blockdev writev readv block ...passed 00:12:19.400 Test: blockdev writev readv size > 128k ...passed 00:12:19.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.400 Test: blockdev comparev and writev ...passed 00:12:19.400 Test: blockdev nvme passthru rw ...passed 00:12:19.400 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.400 Test: blockdev nvme admin passthru ...passed 00:12:19.400 Test: blockdev copy ...passed 00:12:19.400 Suite: bdevio tests on: nvme1n2 00:12:19.400 Test: blockdev write read block ...passed 00:12:19.400 Test: blockdev write zeroes read block ...passed 00:12:19.400 Test: blockdev write zeroes read no split ...passed 00:12:19.400 Test: blockdev write zeroes read split ...passed 00:12:19.400 Test: blockdev write zeroes read split partial ...passed 00:12:19.400 Test: blockdev reset ...passed 00:12:19.400 Test: blockdev write read 8 blocks ...passed 00:12:19.400 Test: blockdev write read size > 128k ...passed 00:12:19.400 Test: blockdev write read invalid size ...passed 00:12:19.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.400 Test: blockdev write read max offset 
...passed 00:12:19.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.400 Test: blockdev writev readv 8 blocks ...passed 00:12:19.400 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.400 Test: blockdev writev readv block ...passed 00:12:19.400 Test: blockdev writev readv size > 128k ...passed 00:12:19.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.400 Test: blockdev comparev and writev ...passed 00:12:19.400 Test: blockdev nvme passthru rw ...passed 00:12:19.401 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.401 Test: blockdev nvme admin passthru ...passed 00:12:19.401 Test: blockdev copy ...passed 00:12:19.401 Suite: bdevio tests on: nvme1n1 00:12:19.401 Test: blockdev write read block ...passed 00:12:19.401 Test: blockdev write zeroes read block ...passed 00:12:19.401 Test: blockdev write zeroes read no split ...passed 00:12:19.401 Test: blockdev write zeroes read split ...passed 00:12:19.401 Test: blockdev write zeroes read split partial ...passed 00:12:19.401 Test: blockdev reset ...passed 00:12:19.401 Test: blockdev write read 8 blocks ...passed 00:12:19.401 Test: blockdev write read size > 128k ...passed 00:12:19.401 Test: blockdev write read invalid size ...passed 00:12:19.401 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.401 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.401 Test: blockdev write read max offset ...passed 00:12:19.401 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.401 Test: blockdev writev readv 8 blocks ...passed 00:12:19.401 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.401 Test: blockdev writev readv block ...passed 00:12:19.401 Test: blockdev writev readv size > 128k ...passed 00:12:19.401 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.401 Test: blockdev comparev and writev ...passed 00:12:19.401 Test: blockdev nvme passthru rw ...passed 00:12:19.401 Test: blockdev nvme passthru vendor specific ...passed 00:12:19.401 Test: blockdev nvme admin passthru ...passed 00:12:19.401 Test: blockdev copy ...passed 00:12:19.401 Suite: bdevio tests on: nvme0n1 00:12:19.401 Test: blockdev write read block ...passed 00:12:19.401 Test: blockdev write zeroes read block ...passed 00:12:19.401 Test: blockdev write zeroes read no split ...passed 00:12:19.401 Test: blockdev write zeroes read split ...passed 00:12:19.661 Test: blockdev write zeroes read split partial ...passed 00:12:19.661 Test: blockdev reset ...passed 00:12:19.661 Test: blockdev write read 8 blocks ...passed 00:12:19.661 Test: blockdev write read size > 128k ...passed 00:12:19.661 Test: blockdev write read invalid size ...passed 00:12:19.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:19.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:19.661 Test: blockdev write read max offset ...passed 00:12:19.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:19.661 Test: blockdev writev readv 8 blocks ...passed 00:12:19.661 Test: blockdev writev readv 30 x 1block ...passed 00:12:19.661 Test: blockdev writev readv block ...passed 00:12:19.661 Test: blockdev writev readv size > 128k ...passed 00:12:19.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:19.661 Test: blockdev comparev and writev ...passed 00:12:19.661 Test: blockdev nvme passthru rw ...passed 00:12:19.661 Test: 
blockdev nvme passthru vendor specific ...passed 00:12:19.661 Test: blockdev nvme admin passthru ...passed 00:12:19.661 Test: blockdev copy ...passed 00:12:19.661 00:12:19.661 Run Summary: Type Total Ran Passed Failed Inactive 00:12:19.661 suites 6 6 n/a 0 0 00:12:19.661 tests 138 138 138 0 0 00:12:19.661 asserts 780 780 780 0 n/a 00:12:19.661 00:12:19.661 Elapsed time = 1.160 seconds 00:12:19.661 0 00:12:19.661 09:50:08 -- bdev/blockdev.sh@293 -- # killprocess 67776 00:12:19.661 09:50:08 -- common/autotest_common.sh@936 -- # '[' -z 67776 ']' 00:12:19.661 09:50:08 -- common/autotest_common.sh@940 -- # kill -0 67776 00:12:19.661 09:50:08 -- common/autotest_common.sh@941 -- # uname 00:12:19.661 09:50:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.661 09:50:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67776 00:12:19.661 killing process with pid 67776 00:12:19.661 09:50:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.661 09:50:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.661 09:50:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67776' 00:12:19.661 09:50:08 -- common/autotest_common.sh@955 -- # kill 67776 00:12:19.661 09:50:08 -- common/autotest_common.sh@960 -- # wait 67776 00:12:20.231 09:50:09 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:20.231 00:12:20.231 real 0m2.228s 00:12:20.231 user 0m5.148s 00:12:20.231 sys 0m0.355s 00:12:20.231 09:50:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:20.231 ************************************ 00:12:20.231 END TEST bdev_bounds 00:12:20.231 ************************************ 00:12:20.231 09:50:09 -- common/autotest_common.sh@10 -- # set +x 00:12:20.491 09:50:09 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:20.491 09:50:09 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:20.491 09:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.491 09:50:09 -- common/autotest_common.sh@10 -- # set +x 00:12:20.491 ************************************ 00:12:20.491 START TEST bdev_nbd 00:12:20.491 ************************************ 00:12:20.491 09:50:09 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:20.491 09:50:09 -- bdev/blockdev.sh@298 -- # uname -s 00:12:20.491 09:50:09 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:20.491 09:50:09 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.491 09:50:09 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:20.491 09:50:09 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:20.491 09:50:09 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:20.491 09:50:09 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:20.491 09:50:09 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:20.491 09:50:09 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:20.491 09:50:09 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:20.491 09:50:09 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:20.491 
09:50:09 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:20.491 09:50:09 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:20.491 09:50:09 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:20.491 09:50:09 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:20.491 09:50:09 -- bdev/blockdev.sh@316 -- # nbd_pid=67832 00:12:20.491 09:50:09 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:20.491 09:50:09 -- bdev/blockdev.sh@318 -- # waitforlisten 67832 /var/tmp/spdk-nbd.sock 00:12:20.491 09:50:09 -- common/autotest_common.sh@829 -- # '[' -z 67832 ']' 00:12:20.491 09:50:09 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:20.491 09:50:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:20.491 09:50:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:20.491 09:50:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:20.491 09:50:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.491 09:50:09 -- common/autotest_common.sh@10 -- # set +x 00:12:20.491 [2024-12-15 09:50:09.350574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:20.491 [2024-12-15 09:50:09.350773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.491 [2024-12-15 09:50:09.489013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.751 [2024-12-15 09:50:09.658149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.324 09:50:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.324 09:50:10 -- common/autotest_common.sh@862 -- # return 0 00:12:21.324 09:50:10 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@24 -- # local i 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:21.324 09:50:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:21.585 09:50:10 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:21.585 09:50:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:21.585 09:50:10 -- common/autotest_common.sh@867 -- # local i 00:12:21.585 09:50:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:21.585 09:50:10 -- common/autotest_common.sh@871 -- # break 00:12:21.585 09:50:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.585 1+0 records in 00:12:21.585 1+0 records out 00:12:21.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974815 s, 4.2 MB/s 00:12:21.585 09:50:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.585 09:50:10 -- common/autotest_common.sh@884 -- # size=4096 00:12:21.585 09:50:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.585 09:50:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:21.585 09:50:10 -- common/autotest_common.sh@887 -- # return 0 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:21.585 09:50:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:21.585 09:50:10 -- common/autotest_common.sh@867 -- # local i 00:12:21.585 09:50:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:21.585 09:50:10 -- common/autotest_common.sh@871 -- # break 00:12:21.585 09:50:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:21.585 09:50:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.585 1+0 records in 00:12:21.585 1+0 records out 00:12:21.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000993661 s, 4.1 MB/s 00:12:21.585 09:50:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.585 09:50:10 -- common/autotest_common.sh@884 -- # size=4096 00:12:21.585 09:50:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.585 09:50:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:21.585 09:50:10 -- common/autotest_common.sh@887 -- # return 0 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:21.585 09:50:10 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:21.846 09:50:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:21.846 09:50:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:21.846 09:50:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:21.846 09:50:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:21.846 09:50:10 -- common/autotest_common.sh@867 -- # local i 00:12:21.846 09:50:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:21.846 09:50:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:21.846 09:50:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:21.846 09:50:10 -- common/autotest_common.sh@871 -- # break 00:12:21.846 09:50:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:21.846 09:50:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:21.846 09:50:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.846 1+0 records in 00:12:21.846 1+0 records out 00:12:21.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000963161 s, 4.3 MB/s 00:12:21.846 09:50:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.846 09:50:10 -- common/autotest_common.sh@884 -- # size=4096 00:12:21.846 09:50:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.846 09:50:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:21.846 09:50:10 -- common/autotest_common.sh@887 -- # return 0 00:12:21.846 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:21.846 09:50:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:21.846 09:50:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:22.107 09:50:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:22.107 09:50:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:22.107 09:50:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:22.107 09:50:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:22.107 09:50:11 -- common/autotest_common.sh@867 -- # local i 00:12:22.107 09:50:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:22.107 09:50:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:22.107 09:50:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:22.107 09:50:11 -- common/autotest_common.sh@871 -- # break 00:12:22.107 09:50:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:22.107 09:50:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:22.107 09:50:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.107 1+0 records in 00:12:22.107 1+0 records out 00:12:22.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115704 s, 3.5 MB/s 00:12:22.107 09:50:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.107 09:50:11 -- common/autotest_common.sh@884 -- # size=4096 00:12:22.107 09:50:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.107 09:50:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:22.107 09:50:11 -- common/autotest_common.sh@887 -- # return 0 00:12:22.107 09:50:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.107 09:50:11 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.107 09:50:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:22.368 09:50:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:22.368 09:50:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:22.368 09:50:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:22.368 09:50:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:22.368 09:50:11 -- common/autotest_common.sh@867 -- # local i 00:12:22.368 09:50:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:22.368 09:50:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:22.368 09:50:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:22.368 09:50:11 -- common/autotest_common.sh@871 -- # break 00:12:22.368 09:50:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:22.368 09:50:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:22.368 09:50:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.368 1+0 records in 00:12:22.368 1+0 records out 00:12:22.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113052 s, 3.6 MB/s 00:12:22.368 09:50:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.368 09:50:11 -- common/autotest_common.sh@884 -- # size=4096 00:12:22.368 09:50:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.368 09:50:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:22.368 09:50:11 -- common/autotest_common.sh@887 -- # return 0 00:12:22.368 09:50:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.368 09:50:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.368 09:50:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:22.630 09:50:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:22.630 09:50:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:22.630 09:50:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:22.630 09:50:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:22.630 09:50:11 -- common/autotest_common.sh@867 -- # local i 00:12:22.630 09:50:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:22.630 09:50:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:22.630 09:50:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:22.630 09:50:11 -- common/autotest_common.sh@871 -- # break 00:12:22.630 09:50:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:22.630 09:50:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:22.630 09:50:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.630 1+0 records in 00:12:22.630 1+0 records out 00:12:22.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000876442 s, 4.7 MB/s 00:12:22.631 09:50:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.631 09:50:11 -- common/autotest_common.sh@884 -- # size=4096 00:12:22.631 09:50:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.631 09:50:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:22.631 09:50:11 -- common/autotest_common.sh@887 -- # return 0 
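Editor's note: each nbd_start_disk above is followed by the same readiness probe: poll /proc/partitions for the device name, then issue a single 4 KiB O_DIRECT read through dd. A condensed sketch of that waitfornbd pattern (the 20-attempt bound is taken from the trace's (( i <= 20 )) loop; the sleep interval is an assumption, since the trace does not show it):

waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not visible in the trace
    done
    # One direct-I/O block proves the device actually answers reads.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}
waitfornbd_sketch nbd0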
00:12:22.631 09:50:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:22.631 09:50:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:22.631 09:50:11 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd0", 00:12:22.892 "bdev_name": "nvme0n1" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd1", 00:12:22.892 "bdev_name": "nvme1n1" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd2", 00:12:22.892 "bdev_name": "nvme1n2" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd3", 00:12:22.892 "bdev_name": "nvme1n3" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd4", 00:12:22.892 "bdev_name": "nvme2n1" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd5", 00:12:22.892 "bdev_name": "nvme3n1" 00:12:22.892 } 00:12:22.892 ]' 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd0", 00:12:22.892 "bdev_name": "nvme0n1" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd1", 00:12:22.892 "bdev_name": "nvme1n1" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd2", 00:12:22.892 "bdev_name": "nvme1n2" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd3", 00:12:22.892 "bdev_name": "nvme1n3" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd4", 00:12:22.892 "bdev_name": "nvme2n1" 00:12:22.892 }, 00:12:22.892 { 00:12:22.892 "nbd_device": "/dev/nbd5", 00:12:22.892 "bdev_name": "nvme3n1" 00:12:22.892 } 00:12:22.892 ]' 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@51 -- # local i 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.892 09:50:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:23.152 09:50:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.152 09:50:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.152 09:50:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@41 -- # break 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.153 09:50:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.153 09:50:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.413 09:50:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.673 09:50:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@41 -- # break 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.934 09:50:12 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.934 09:50:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@65 -- # true 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@65 -- # count=0 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@122 -- # count=0 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@127 -- # return 0 00:12:24.196 09:50:13 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@12 -- # local i 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:24.196 09:50:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:24.456 /dev/nbd0 00:12:24.456 09:50:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.456 09:50:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.456 09:50:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:24.456 09:50:13 -- common/autotest_common.sh@867 -- # local i 00:12:24.456 09:50:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.456 09:50:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.456 09:50:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:24.456 09:50:13 -- common/autotest_common.sh@871 -- # break 00:12:24.456 09:50:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.456 09:50:13 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:12:24.456 09:50:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.456 1+0 records in 00:12:24.456 1+0 records out 00:12:24.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071854 s, 5.7 MB/s 00:12:24.456 09:50:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.456 09:50:13 -- common/autotest_common.sh@884 -- # size=4096 00:12:24.456 09:50:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.456 09:50:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.456 09:50:13 -- common/autotest_common.sh@887 -- # return 0 00:12:24.456 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.456 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:24.456 09:50:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:24.716 /dev/nbd1 00:12:24.716 09:50:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:24.716 09:50:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:24.716 09:50:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:24.716 09:50:13 -- common/autotest_common.sh@867 -- # local i 00:12:24.716 09:50:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.716 09:50:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.716 09:50:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:24.716 09:50:13 -- common/autotest_common.sh@871 -- # break 00:12:24.716 09:50:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.716 09:50:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.716 09:50:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.716 1+0 records in 00:12:24.716 1+0 records out 00:12:24.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769931 s, 5.3 MB/s 00:12:24.716 09:50:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.716 09:50:13 -- common/autotest_common.sh@884 -- # size=4096 00:12:24.716 09:50:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.716 09:50:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.716 09:50:13 -- common/autotest_common.sh@887 -- # return 0 00:12:24.716 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.716 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:24.716 09:50:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:24.976 /dev/nbd10 00:12:24.976 09:50:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:24.976 09:50:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:24.976 09:50:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:24.976 09:50:13 -- common/autotest_common.sh@867 -- # local i 00:12:24.976 09:50:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.976 09:50:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.977 09:50:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:24.977 09:50:13 -- common/autotest_common.sh@871 -- # break 00:12:24.977 09:50:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.977 09:50:13 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.977 09:50:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.977 1+0 records in 00:12:24.977 1+0 records out 00:12:24.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107815 s, 3.8 MB/s 00:12:24.977 09:50:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.977 09:50:13 -- common/autotest_common.sh@884 -- # size=4096 00:12:24.977 09:50:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.977 09:50:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.977 09:50:13 -- common/autotest_common.sh@887 -- # return 0 00:12:24.977 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.977 09:50:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:24.977 09:50:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:24.977 /dev/nbd11 00:12:25.236 09:50:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:25.236 09:50:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:25.236 09:50:14 -- common/autotest_common.sh@867 -- # local i 00:12:25.236 09:50:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:25.236 09:50:14 -- common/autotest_common.sh@871 -- # break 00:12:25.236 09:50:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.236 1+0 records in 00:12:25.236 1+0 records out 00:12:25.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052328 s, 7.8 MB/s 00:12:25.236 09:50:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.236 09:50:14 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.236 09:50:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.236 09:50:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.236 09:50:14 -- common/autotest_common.sh@887 -- # return 0 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:25.236 /dev/nbd12 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:25.236 09:50:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:25.236 09:50:14 -- common/autotest_common.sh@867 -- # local i 00:12:25.236 09:50:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:25.236 09:50:14 -- common/autotest_common.sh@871 -- # break 00:12:25.236 09:50:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
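Editor's note: between passes the trace dumps nbd_get_disks and reduces the JSON to device paths with jq. A self-contained sketch of that reduction (socket and script paths copied from the trace):

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
disks_json=$("$rpc" -s "$sock" nbd_get_disks)
# Same jq filter as the trace: one nbd_device path per exported disk.
mapfile -t disk_names < <(jq -r '.[] | .nbd_device' <<<"$disks_json")
# Counting "/dev/nbd" matches is how the helpers assert 6 (or, after
# teardown, 0) exported disks.
count=$(printf '%s\n' "${disk_names[@]}" | grep -c /dev/nbd)
echo "exported $count disks: ${disk_names[*]}"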
00:12:25.236 09:50:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.236 09:50:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.236 1+0 records in 00:12:25.236 1+0 records out 00:12:25.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136428 s, 3.0 MB/s 00:12:25.236 09:50:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.236 09:50:14 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.236 09:50:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.236 09:50:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.236 09:50:14 -- common/autotest_common.sh@887 -- # return 0 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.236 09:50:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:25.494 /dev/nbd13 00:12:25.494 09:50:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:25.494 09:50:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:25.494 09:50:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:25.494 09:50:14 -- common/autotest_common.sh@867 -- # local i 00:12:25.494 09:50:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.494 09:50:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.494 09:50:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:25.494 09:50:14 -- common/autotest_common.sh@871 -- # break 00:12:25.494 09:50:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.494 09:50:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.494 09:50:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.494 1+0 records in 00:12:25.494 1+0 records out 00:12:25.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103699 s, 3.9 MB/s 00:12:25.494 09:50:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.494 09:50:14 -- common/autotest_common.sh@884 -- # size=4096 00:12:25.494 09:50:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.494 09:50:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.494 09:50:14 -- common/autotest_common.sh@887 -- # return 0 00:12:25.494 09:50:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.495 09:50:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:25.495 09:50:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:25.495 09:50:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.495 09:50:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd0", 00:12:25.755 "bdev_name": "nvme0n1" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd1", 00:12:25.755 "bdev_name": "nvme1n1" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd10", 00:12:25.755 "bdev_name": "nvme1n2" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd11", 00:12:25.755 "bdev_name": "nvme1n3" 00:12:25.755 }, 00:12:25.755 { 
00:12:25.755 "nbd_device": "/dev/nbd12", 00:12:25.755 "bdev_name": "nvme2n1" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd13", 00:12:25.755 "bdev_name": "nvme3n1" 00:12:25.755 } 00:12:25.755 ]' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd0", 00:12:25.755 "bdev_name": "nvme0n1" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd1", 00:12:25.755 "bdev_name": "nvme1n1" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd10", 00:12:25.755 "bdev_name": "nvme1n2" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd11", 00:12:25.755 "bdev_name": "nvme1n3" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd12", 00:12:25.755 "bdev_name": "nvme2n1" 00:12:25.755 }, 00:12:25.755 { 00:12:25.755 "nbd_device": "/dev/nbd13", 00:12:25.755 "bdev_name": "nvme3n1" 00:12:25.755 } 00:12:25.755 ]' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:25.755 /dev/nbd1 00:12:25.755 /dev/nbd10 00:12:25.755 /dev/nbd11 00:12:25.755 /dev/nbd12 00:12:25.755 /dev/nbd13' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:25.755 /dev/nbd1 00:12:25.755 /dev/nbd10 00:12:25.755 /dev/nbd11 00:12:25.755 /dev/nbd12 00:12:25.755 /dev/nbd13' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@65 -- # count=6 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@95 -- # count=6 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:25.755 256+0 records in 00:12:25.755 256+0 records out 00:12:25.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522627 s, 201 MB/s 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:25.755 256+0 records in 00:12:25.755 256+0 records out 00:12:25.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0934701 s, 11.2 MB/s 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.755 09:50:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:26.016 256+0 records in 00:12:26.016 256+0 records out 00:12:26.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0939751 s, 11.2 MB/s 00:12:26.016 09:50:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.016 09:50:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 
bs=4096 count=256 oflag=direct 00:12:26.016 256+0 records in 00:12:26.016 256+0 records out 00:12:26.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104827 s, 10.0 MB/s 00:12:26.016 09:50:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.016 09:50:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:26.277 256+0 records in 00:12:26.277 256+0 records out 00:12:26.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161272 s, 6.5 MB/s 00:12:26.277 09:50:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.277 09:50:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:26.538 256+0 records in 00:12:26.538 256+0 records out 00:12:26.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257912 s, 4.1 MB/s 00:12:26.538 09:50:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.538 09:50:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:26.799 256+0 records in 00:12:26.799 256+0 records out 00:12:26.799 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.213457 s, 4.9 MB/s 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
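Editor's note: the write/verify pass above distills to three steps: fill one shared 1 MiB file from /dev/urandom, push it through every nbd device with O_DIRECT writes, then cmp each device's first 1 MiB against the file. A condensed sketch (device list and dd/cmp arguments copied from the trace; the temp path is simplified from the trace's test/bdev/nbdrandtest):

tmp=/tmp/nbdrandtest
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB of random data
for nbd in "${nbds[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbds[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"   # any mismatch fails the verify step
done
rm "$tmp"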
00:12:26.799 09:50:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@51 -- # local i 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.799 09:50:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@41 -- # break 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.061 09:50:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@41 -- # break 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@41 -- # break 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.322 09:50:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:27.583 09:50:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:27.583 09:50:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:27.583 09:50:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:27.583 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.583 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.583 09:50:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:27.584 09:50:16 -- bdev/nbd_common.sh@41 -- # break 00:12:27.584 09:50:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.584 09:50:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.584 09:50:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@41 -- # break 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.844 09:50:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@41 -- # break 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.104 09:50:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@65 -- # true 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@65 -- # count=0 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@104 -- # count=0 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@109 -- # return 0 00:12:28.104 09:50:17 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:28.104 09:50:17 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:28.365 malloc_lvol_verify 00:12:28.365 09:50:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:28.626 403f0bd0-2d51-4e7c-9610-25d8d123d60c 00:12:28.626 09:50:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:12:28.626 83c009b0-756a-4306-a475-a61eec127bf0 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:28.886 /dev/nbd0 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:28.886 mke2fs 1.47.0 (5-Feb-2023) 00:12:28.886 Discarding device blocks: 0/4096 done 00:12:28.886 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:28.886 00:12:28.886 Allocating group tables: 0/1 done 00:12:28.886 Writing inode tables: 0/1 done 00:12:28.886 Creating journal (1024 blocks): done 00:12:28.886 Writing superblocks and filesystem accounting information: 0/1 done 00:12:28.886 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@51 -- # local i 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.886 09:50:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@41 -- # break 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:29.147 09:50:18 -- bdev/nbd_common.sh@147 -- # return 0 00:12:29.147 09:50:18 -- bdev/blockdev.sh@324 -- # killprocess 67832 00:12:29.147 09:50:18 -- common/autotest_common.sh@936 -- # '[' -z 67832 ']' 00:12:29.147 09:50:18 -- common/autotest_common.sh@940 -- # kill -0 67832 00:12:29.147 09:50:18 -- common/autotest_common.sh@941 -- # uname 00:12:29.147 09:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.147 09:50:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67832 00:12:29.147 09:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.147 09:50:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.147 killing process with pid 67832 00:12:29.147 09:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67832' 00:12:29.147 09:50:18 -- common/autotest_common.sh@955 -- # kill 67832 00:12:29.147 09:50:18 -- common/autotest_common.sh@960 -- # wait 67832 00:12:30.090 09:50:18 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:30.090 00:12:30.090 real 0m9.515s 00:12:30.090 user 0m13.007s 00:12:30.090 sys 0m3.199s 00:12:30.090 ************************************ 00:12:30.090 END TEST bdev_nbd 00:12:30.090 ************************************ 00:12:30.090 09:50:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.090 09:50:18 -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 09:50:18 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:30.090 09:50:18 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.090 09:50:18 -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 ************************************ 00:12:30.090 START TEST bdev_fio 00:12:30.090 ************************************ 00:12:30.090 09:50:18 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:30.090 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:30.090 09:50:18 -- bdev/blockdev.sh@329 -- # local env_context 00:12:30.090 09:50:18 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:30.090 09:50:18 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:30.090 09:50:18 -- bdev/blockdev.sh@337 -- # echo '' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:30.090 09:50:18 -- bdev/blockdev.sh@337 -- # env_context= 00:12:30.090 09:50:18 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.090 09:50:18 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:30.090 09:50:18 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:30.090 09:50:18 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:30.090 09:50:18 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:30.090 09:50:18 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:30.090 09:50:18 -- common/autotest_common.sh@1290 -- # cat 00:12:30.090 09:50:18 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1303 -- # cat 00:12:30.090 09:50:18 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:30.090 09:50:18 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:30.090 09:50:18 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:30.090 09:50:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.090 09:50:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:30.090 09:50:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.090 09:50:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:30.090 09:50:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.090 09:50:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:30.090 09:50:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.090 09:50:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
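Editor's note: the echo pairs in the trace build the shared fio job file one section per bdev. The generation loop reduces to the sketch below (config path from the trace; global options such as serialize_overlap=1 are appended by fio_config_gen before this loop runs):

fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
for b in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
    # One job section per bdev; with ioengine=spdk_bdev, filename= names
    # the SPDK bdev rather than a file on disk.
    echo "[job_$b]"    >> "$fio_cfg"
    echo "filename=$b" >> "$fio_cfg"
done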
00:12:30.090 09:50:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:30.090 09:50:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.090 09:50:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:30.090 09:50:18 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:30.090 09:50:18 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:30.090 09:50:18 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:30.090 09:50:18 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.090 09:50:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.090 09:50:18 -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 ************************************ 00:12:30.090 START TEST bdev_fio_rw_verify 00:12:30.090 ************************************ 00:12:30.090 09:50:18 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.090 09:50:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.090 09:50:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:30.090 09:50:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:30.090 09:50:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:30.090 09:50:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.090 09:50:18 -- common/autotest_common.sh@1330 -- # shift 00:12:30.090 09:50:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:30.090 09:50:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:30.090 09:50:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.090 09:50:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:30.090 09:50:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:30.090 09:50:18 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:30.090 09:50:18 -- common/autotest_common.sh@1336 -- # break 00:12:30.090 09:50:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:30.090 09:50:18 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:30.351 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.351 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.351 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.351 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.351 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.351 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:30.351 fio-3.35 00:12:30.351 Starting 6 threads 00:12:42.616 00:12:42.616 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68227: Sun Dec 15 09:50:29 2024 00:12:42.616 read: IOPS=11.1k, BW=43.5MiB/s (45.7MB/s)(436MiB/10001msec) 00:12:42.616 slat (usec): min=2, max=3319, avg= 6.72, stdev=22.87 00:12:42.616 clat (usec): min=83, max=13946, avg=1848.56, stdev=909.29 00:12:42.616 lat (usec): min=87, max=13961, avg=1855.27, stdev=909.90 00:12:42.616 clat percentiles (usec): 00:12:42.616 | 50.000th=[ 1729], 99.000th=[ 4686], 99.900th=[ 5997], 99.990th=[10814], 00:12:42.616 | 99.999th=[13960] 00:12:42.616 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(448MiB/10001msec); 0 zone resets 00:12:42.616 slat (usec): min=10, max=5316, avg=45.31, stdev=173.14 00:12:42.616 clat (usec): min=104, max=9021, avg=2040.09, stdev=974.51 00:12:42.616 lat (usec): min=120, max=9055, avg=2085.40, stdev=989.32 00:12:42.616 clat percentiles (usec): 00:12:42.616 | 50.000th=[ 1876], 99.000th=[ 4948], 99.900th=[ 6652], 99.990th=[ 7963], 00:12:42.616 | 99.999th=[ 8979] 00:12:42.616 bw ( KiB/s): min=36539, max=51496, per=99.90%, avg=45835.63, stdev=828.20, samples=114 00:12:42.616 iops : min= 9133, max=12874, avg=11457.47, stdev=207.06, samples=114 00:12:42.616 lat (usec) : 100=0.01%, 250=0.29%, 500=1.88%, 750=4.59%, 1000=6.90% 00:12:42.616 lat (msec) : 2=45.39%, 4=37.49%, 10=3.45%, 20=0.01% 00:12:42.616 cpu : usr=46.67%, sys=30.54%, ctx=5193, majf=0, minf=14207 00:12:42.616 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:42.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.616 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.616 issued rwts: total=111499,114715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:42.616 00:12:42.616 Run status group 0 (all jobs): 00:12:42.616 READ: bw=43.5MiB/s (45.7MB/s), 43.5MiB/s-43.5MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10001-10001msec 00:12:42.616 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=448MiB (470MB), run=10001-10001msec 00:12:42.616 ----------------------------------------------------- 00:12:42.616 Suppressions used: 00:12:42.616 count bytes template 00:12:42.616 6 48 /usr/src/fio/parse.c 00:12:42.616 3153 302688 /usr/src/fio/iolog.c 00:12:42.616 1 8 libtcmalloc_minimal.so 00:12:42.616 1 904 libcrypto.so 00:12:42.616 
----------------------------------------------------- 00:12:42.616 00:12:42.616 00:12:42.616 real 0m11.932s 00:12:42.616 user 0m29.584s 00:12:42.616 sys 0m18.683s 00:12:42.616 09:50:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.616 ************************************ 00:12:42.616 END TEST bdev_fio_rw_verify 00:12:42.616 ************************************ 00:12:42.616 09:50:30 -- common/autotest_common.sh@10 -- # set +x 00:12:42.616 09:50:30 -- bdev/blockdev.sh@348 -- # rm -f 00:12:42.616 09:50:30 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.616 09:50:30 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:42.616 09:50:30 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.616 09:50:30 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:42.616 09:50:30 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:42.616 09:50:30 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:42.616 09:50:30 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:42.616 09:50:30 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:42.616 09:50:30 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:42.616 09:50:30 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:42.616 09:50:30 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.616 09:50:30 -- common/autotest_common.sh@1290 -- # cat 00:12:42.616 09:50:30 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:42.616 09:50:30 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:42.616 09:50:30 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:42.616 09:50:30 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:42.617 09:50:30 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e2a5e2e8-7f8c-4a1e-8e46-345b0f0ce52f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e2a5e2e8-7f8c-4a1e-8e46-345b0f0ce52f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5cd725c7-4f0c-40de-a6cd-4f95fdb211ec"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5cd725c7-4f0c-40de-a6cd-4f95fdb211ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "62aeb4d7-2679-4d8a-8fba-b45334d928ff"' ' ],' ' "product_name": 
"xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "62aeb4d7-2679-4d8a-8fba-b45334d928ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "e967c824-0e90-4715-8038-861578c5cf16"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e967c824-0e90-4715-8038-861578c5cf16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4d043fb4-0a11-46f8-b581-3097c74bd53c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d043fb4-0a11-46f8-b581-3097c74bd53c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b7dfef40-3df9-40ff-8f80-16a2fcc722e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b7dfef40-3df9-40ff-8f80-16a2fcc722e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:42.617 09:50:30 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:42.617 09:50:30 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.617 /home/vagrant/spdk_repo/spdk 00:12:42.617 ************************************ 00:12:42.617 END TEST bdev_fio 00:12:42.617 ************************************ 00:12:42.617 09:50:30 -- bdev/blockdev.sh@360 -- # popd 00:12:42.617 09:50:30 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:42.617 09:50:30 -- bdev/blockdev.sh@362 -- # return 0 00:12:42.617 00:12:42.617 real 0m12.108s 00:12:42.617 user 0m29.658s 00:12:42.617 sys 0m18.759s 00:12:42.617 09:50:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.617 09:50:30 -- common/autotest_common.sh@10 -- # set +x 00:12:42.617 09:50:31 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:42.617 09:50:31 -- 
bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:42.617 09:50:31 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:42.617 09:50:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.617 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:12:42.617 ************************************ 00:12:42.617 START TEST bdev_verify 00:12:42.617 ************************************ 00:12:42.617 09:50:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:42.617 [2024-12-15 09:50:31.108053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:42.617 [2024-12-15 09:50:31.108191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68403 ] 00:12:42.617 [2024-12-15 09:50:31.262826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:42.617 [2024-12-15 09:50:31.537957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.617 [2024-12-15 09:50:31.538077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.188 Running I/O for 5 seconds... 00:12:48.482 00:12:48.482 Latency(us) 00:12:48.482 [2024-12-15T09:50:37.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x0 length 0x20000 00:12:48.482 nvme0n1 : 5.09 2281.75 8.91 0.00 0.00 55785.55 15930.29 75416.81 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x20000 length 0x20000 00:12:48.482 nvme0n1 : 5.07 2038.73 7.96 0.00 0.00 62523.01 6074.68 98404.82 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x0 length 0x80000 00:12:48.482 nvme1n1 : 5.10 2122.58 8.29 0.00 0.00 59917.81 5268.09 85095.98 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x80000 length 0x80000 00:12:48.482 nvme1n1 : 5.09 1950.53 7.62 0.00 0.00 65246.14 14417.92 91952.05 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x0 length 0x80000 00:12:48.482 nvme1n2 : 5.11 2140.28 8.36 0.00 0.00 59318.31 12351.02 77030.01 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x80000 length 0x80000 00:12:48.482 nvme1n2 : 5.08 1944.29 7.59 0.00 0.00 65302.67 15224.52 105664.20 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x0 length 0x80000 00:12:48.482 nvme1n3 : 5.08 2191.73 8.56 0.00 0.00 58025.10 4612.73 79449.80 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme1n3 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x80000 length 0x80000 00:12:48.482 nvme1n3 : 5.10 2066.02 8.07 0.00 0.00 61467.04 14720.39 85095.98 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x0 length 0xbd0bd 00:12:48.482 nvme2n1 : 5.10 2040.90 7.97 0.00 0.00 62147.79 11292.36 79449.80 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:48.482 nvme2n1 : 5.10 1700.78 6.64 0.00 0.00 74405.34 7309.78 112923.57 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0x0 length 0xa0000 00:12:48.482 nvme3n1 : 5.09 2302.65 8.99 0.00 0.00 54987.88 14115.45 81466.29 00:12:48.482 [2024-12-15T09:50:37.498Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:48.482 Verification LBA range: start 0xa0000 length 0xa0000 00:12:48.482 nvme3n1 : 5.10 1971.61 7.70 0.00 0.00 64091.59 5192.47 97598.23 00:12:48.482 [2024-12-15T09:50:37.498Z] =================================================================================================================== 00:12:48.482 [2024-12-15T09:50:37.498Z] Total : 24751.86 96.69 0.00 0.00 61559.74 4612.73 112923.57 00:12:49.427 00:12:49.427 real 0m7.089s 00:12:49.427 user 0m8.947s 00:12:49.427 sys 0m3.090s 00:12:49.427 09:50:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.427 ************************************ 00:12:49.427 09:50:38 -- common/autotest_common.sh@10 -- # set +x 00:12:49.427 END TEST bdev_verify 00:12:49.427 ************************************ 00:12:49.427 09:50:38 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.427 09:50:38 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:49.427 09:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.427 09:50:38 -- common/autotest_common.sh@10 -- # set +x 00:12:49.427 ************************************ 00:12:49.427 START TEST bdev_verify_big_io 00:12:49.427 ************************************ 00:12:49.427 09:50:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:49.427 [2024-12-15 09:50:38.268949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:49.427 [2024-12-15 09:50:38.269107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68507 ] 00:12:49.427 [2024-12-15 09:50:38.422951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:49.688 [2024-12-15 09:50:38.680297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.688 [2024-12-15 09:50:38.680304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.262 Running I/O for 5 seconds... 
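Both verification passes in this suite are driven by SPDK's bdevperf example application rather than fio. The big-I/O invocation, as recorded in the log above, is:

    # flags as logged: -q = queue depth, -o = I/O size in bytes, -w = workload,
    # -t = run time in seconds, -m = reactor core mask (0x3 = cores 0 and 1)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

It differs from the plain bdev_verify run only in -o (65536 vs 4096). The -C flag appears to let every reactor drive every bdev, which would account for each device showing up twice in the result tables, once per core mask.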
00:12:56.853 00:12:56.853 Latency(us) 00:12:56.853 [2024-12-15T09:50:45.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x0 length 0x2000 00:12:56.853 nvme0n1 : 5.55 235.15 14.70 0.00 0.00 538024.27 29440.79 693673.35 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x2000 length 0x2000 00:12:56.853 nvme0n1 : 5.52 283.94 17.75 0.00 0.00 431835.88 72593.72 813049.70 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x0 length 0x8000 00:12:56.853 nvme1n1 : 5.54 251.21 15.70 0.00 0.00 495162.77 24802.86 683994.19 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x8000 length 0x8000 00:12:56.853 nvme1n1 : 5.58 280.91 17.56 0.00 0.00 431910.26 124215.93 771106.66 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x0 length 0x8000 00:12:56.853 nvme1n2 : 5.55 203.52 12.72 0.00 0.00 599397.69 30045.74 764653.88 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x8000 length 0x8000 00:12:56.853 nvme1n2 : 5.61 279.68 17.48 0.00 0.00 429641.90 86709.17 690446.97 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x0 length 0x8000 00:12:56.853 nvme1n3 : 5.54 251.06 15.69 0.00 0.00 480929.39 20568.22 590428.95 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x8000 length 0x8000 00:12:56.853 nvme1n3 : 5.62 295.96 18.50 0.00 0.00 400276.21 36700.16 538806.74 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x0 length 0xbd0b 00:12:56.853 nvme2n1 : 5.55 266.35 16.65 0.00 0.00 447902.10 26012.75 477505.38 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:56.853 nvme2n1 : 5.62 339.94 21.25 0.00 0.00 338567.10 24097.08 722710.84 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0x0 length 0xa000 00:12:56.853 nvme3n1 : 5.55 250.51 15.66 0.00 0.00 467975.62 7965.14 590428.95 00:12:56.853 [2024-12-15T09:50:45.869Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:56.853 Verification LBA range: start 0xa000 length 0xa000 00:12:56.853 nvme3n1 : 5.68 398.22 24.89 0.00 0.00 281213.18 1506.07 395232.49 00:12:56.853 [2024-12-15T09:50:45.869Z] =================================================================================================================== 00:12:56.853 [2024-12-15T09:50:45.869Z] Total : 3336.45 208.53 0.00 0.00 430964.35 1506.07 813049.70 00:12:57.115 00:12:57.115 real 0m7.830s 00:12:57.115 user 
0m13.778s 00:12:57.115 sys 0m0.705s 00:12:57.115 09:50:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.115 ************************************ 00:12:57.115 END TEST bdev_verify_big_io 00:12:57.115 ************************************ 00:12:57.115 09:50:46 -- common/autotest_common.sh@10 -- # set +x 00:12:57.115 09:50:46 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.115 09:50:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:57.115 09:50:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.115 09:50:46 -- common/autotest_common.sh@10 -- # set +x 00:12:57.115 ************************************ 00:12:57.115 START TEST bdev_write_zeroes 00:12:57.115 ************************************ 00:12:57.115 09:50:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.376 [2024-12-15 09:50:46.171392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:57.376 [2024-12-15 09:50:46.171525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68620 ] 00:12:57.376 [2024-12-15 09:50:46.325827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.638 [2024-12-15 09:50:46.549113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.209 Running I/O for 1 seconds... 00:12:59.154 00:12:59.154 Latency(us) 00:12:59.154 [2024-12-15T09:50:48.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.154 [2024-12-15T09:50:48.170Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.154 nvme0n1 : 1.02 11839.11 46.25 0.00 0.00 10800.95 9023.80 23290.49 00:12:59.154 [2024-12-15T09:50:48.170Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.154 nvme1n1 : 1.01 11834.83 46.23 0.00 0.00 10795.98 9023.80 23693.78 00:12:59.154 [2024-12-15T09:50:48.170Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.154 nvme1n2 : 1.01 11820.39 46.17 0.00 0.00 10799.63 9023.80 24097.08 00:12:59.154 [2024-12-15T09:50:48.170Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.154 nvme1n3 : 1.01 11806.27 46.12 0.00 0.00 10800.13 9023.80 24500.38 00:12:59.154 [2024-12-15T09:50:48.170Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.154 nvme2n1 : 1.01 12390.81 48.40 0.00 0.00 10279.79 4889.99 18551.73 00:12:59.154 [2024-12-15T09:50:48.170Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.154 nvme3n1 : 1.02 11805.44 46.12 0.00 0.00 10728.27 4637.93 25004.50 00:12:59.154 [2024-12-15T09:50:48.170Z] =================================================================================================================== 00:12:59.154 [2024-12-15T09:50:48.170Z] Total : 71496.84 279.28 0.00 0.00 10697.11 4637.93 25004.50 00:13:00.097 ************************************ 00:13:00.097 END TEST bdev_write_zeroes 00:13:00.097 ************************************ 00:13:00.097 00:13:00.097 real 
0m2.776s 00:13:00.097 user 0m2.086s 00:13:00.097 sys 0m0.510s 00:13:00.097 09:50:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.097 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:00.097 09:50:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.097 09:50:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:00.097 09:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.097 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:00.097 ************************************ 00:13:00.097 START TEST bdev_json_nonenclosed 00:13:00.097 ************************************ 00:13:00.097 09:50:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.097 [2024-12-15 09:50:49.023120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.097 [2024-12-15 09:50:49.023504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68662 ] 00:13:00.358 [2024-12-15 09:50:49.177991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.619 [2024-12-15 09:50:49.395779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.619 [2024-12-15 09:50:49.395968] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:00.619 [2024-12-15 09:50:49.395993] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:00.880 ************************************ 00:13:00.880 END TEST bdev_json_nonenclosed 00:13:00.880 ************************************ 00:13:00.880 00:13:00.880 real 0m0.752s 00:13:00.880 user 0m0.509s 00:13:00.880 sys 0m0.136s 00:13:00.880 09:50:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.880 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:13:00.880 09:50:49 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.880 09:50:49 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:00.880 09:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.880 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:13:00.880 ************************************ 00:13:00.880 START TEST bdev_json_nonarray 00:13:00.880 ************************************ 00:13:00.880 09:50:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.880 [2024-12-15 09:50:49.841027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
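bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: bdevperf is pointed at deliberately malformed JSON configs and must shut down cleanly through spdk_app_stop with a non-zero code instead of crashing. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; the two logged error messages suggest shapes roughly like this hypothetical reconstruction:

    # "not enclosed in {}."             -> top level is not a JSON object
    printf '"subsystems": []\n' > nonenclosed.json
    # "'subsystems' should be an array" -> subsystems is an object, not an array
    printf '{"subsystems": {}}\n' > nonarray.json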
00:13:00.880 [2024-12-15 09:50:49.841387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68693 ] 00:13:01.142 [2024-12-15 09:50:49.986972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.403 [2024-12-15 09:50:50.225965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.403 [2024-12-15 09:50:50.226165] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:01.403 [2024-12-15 09:50:50.226185] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.664 00:13:01.664 real 0m0.769s 00:13:01.664 user 0m0.535s 00:13:01.664 sys 0m0.126s 00:13:01.664 09:50:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.664 ************************************ 00:13:01.664 09:50:50 -- common/autotest_common.sh@10 -- # set +x 00:13:01.664 END TEST bdev_json_nonarray 00:13:01.664 ************************************ 00:13:01.664 09:50:50 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:01.664 09:50:50 -- bdev/blockdev.sh@809 -- # cleanup 00:13:01.664 09:50:50 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:01.664 09:50:50 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:01.664 09:50:50 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:01.664 09:50:50 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:02.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:04.526 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:04.526 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.915 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.915 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.915 00:13:05.915 real 0m57.589s 00:13:05.915 user 1m23.922s 00:13:05.915 sys 0m34.964s 00:13:05.915 09:50:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:05.915 ************************************ 00:13:05.915 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.915 END TEST blockdev_xnvme 00:13:05.915 ************************************ 00:13:06.177 09:50:54 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:06.177 09:50:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:06.177 09:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.177 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:06.177 ************************************ 00:13:06.177 START TEST ublk 00:13:06.177 ************************************ 00:13:06.177 09:50:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:06.177 * Looking for test storage... 
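A few entries up, /home/vagrant/spdk_repo/spdk/scripts/setup.sh hands the emulated NVMe controllers (1b36 0010) from the kernel nvme driver to uio_pci_generic, while the virtio disk carrying the root filesystem (1af4 1001) is skipped because it has active mounts. The usual mechanism for such a rebind is the sysfs driver_override interface; a minimal sketch with the device address taken from the log (the exact setup.sh internals are assumed, not shown here):

    dev=0000:00:06.0
    modprobe uio_pci_generic
    # detach from the kernel nvme driver, then let the PCI core re-probe the
    # device with the override in place
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe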
00:13:06.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:06.177 09:50:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:06.177 09:50:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:06.177 09:50:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:06.177 09:50:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:06.177 09:50:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:06.177 09:50:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:06.177 09:50:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:06.177 09:50:55 -- scripts/common.sh@335 -- # IFS=.-: 00:13:06.177 09:50:55 -- scripts/common.sh@335 -- # read -ra ver1 00:13:06.177 09:50:55 -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.177 09:50:55 -- scripts/common.sh@336 -- # read -ra ver2 00:13:06.177 09:50:55 -- scripts/common.sh@337 -- # local 'op=<' 00:13:06.177 09:50:55 -- scripts/common.sh@339 -- # ver1_l=2 00:13:06.177 09:50:55 -- scripts/common.sh@340 -- # ver2_l=1 00:13:06.177 09:50:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:06.177 09:50:55 -- scripts/common.sh@343 -- # case "$op" in 00:13:06.177 09:50:55 -- scripts/common.sh@344 -- # : 1 00:13:06.177 09:50:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:06.177 09:50:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:06.177 09:50:55 -- scripts/common.sh@364 -- # decimal 1 00:13:06.177 09:50:55 -- scripts/common.sh@352 -- # local d=1 00:13:06.177 09:50:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.177 09:50:55 -- scripts/common.sh@354 -- # echo 1 00:13:06.177 09:50:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:06.177 09:50:55 -- scripts/common.sh@365 -- # decimal 2 00:13:06.177 09:50:55 -- scripts/common.sh@352 -- # local d=2 00:13:06.177 09:50:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.177 09:50:55 -- scripts/common.sh@354 -- # echo 2 00:13:06.177 09:50:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:06.177 09:50:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:06.177 09:50:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:06.177 09:50:55 -- scripts/common.sh@367 -- # return 0 00:13:06.177 09:50:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.177 09:50:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:06.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.177 --rc genhtml_branch_coverage=1 00:13:06.177 --rc genhtml_function_coverage=1 00:13:06.177 --rc genhtml_legend=1 00:13:06.177 --rc geninfo_all_blocks=1 00:13:06.177 --rc geninfo_unexecuted_blocks=1 00:13:06.177 00:13:06.177 ' 00:13:06.177 09:50:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:06.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.177 --rc genhtml_branch_coverage=1 00:13:06.177 --rc genhtml_function_coverage=1 00:13:06.177 --rc genhtml_legend=1 00:13:06.177 --rc geninfo_all_blocks=1 00:13:06.177 --rc geninfo_unexecuted_blocks=1 00:13:06.177 00:13:06.177 ' 00:13:06.177 09:50:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:06.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.177 --rc genhtml_branch_coverage=1 00:13:06.177 --rc genhtml_function_coverage=1 00:13:06.177 --rc genhtml_legend=1 00:13:06.177 --rc geninfo_all_blocks=1 00:13:06.177 --rc geninfo_unexecuted_blocks=1 00:13:06.177 00:13:06.177 ' 00:13:06.177 09:50:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:06.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.177 --rc genhtml_branch_coverage=1 00:13:06.177 --rc genhtml_function_coverage=1 00:13:06.177 --rc genhtml_legend=1 00:13:06.177 --rc geninfo_all_blocks=1 00:13:06.177 --rc geninfo_unexecuted_blocks=1 00:13:06.177 00:13:06.177 ' 00:13:06.177 09:50:55 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:06.177 09:50:55 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:06.177 09:50:55 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:06.177 09:50:55 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:06.177 09:50:55 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:06.177 09:50:55 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:06.177 09:50:55 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:06.177 09:50:55 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:06.177 09:50:55 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:06.177 09:50:55 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:06.177 09:50:55 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:06.177 09:50:55 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:06.177 09:50:55 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:06.177 09:50:55 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:06.177 09:50:55 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:06.177 09:50:55 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:06.177 09:50:55 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:06.177 09:50:55 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:06.177 09:50:55 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:06.177 09:50:55 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:06.177 09:50:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:06.177 09:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.177 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.177 ************************************ 00:13:06.177 START TEST test_save_ublk_config 00:13:06.177 ************************************ 00:13:06.177 09:50:55 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:06.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.177 09:50:55 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:06.177 09:50:55 -- ublk/ublk.sh@103 -- # tgtpid=69007 00:13:06.177 09:50:55 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:06.177 09:50:55 -- ublk/ublk.sh@106 -- # waitforlisten 69007 00:13:06.177 09:50:55 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:06.177 09:50:55 -- common/autotest_common.sh@829 -- # '[' -z 69007 ']' 00:13:06.177 09:50:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.177 09:50:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.177 09:50:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.178 09:50:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.178 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.438 [2024-12-15 09:50:55.220645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
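test_save_ublk_config exercises configuration round-tripping: a target is built up over RPC, its state is captured with save_config, and the captured JSON is later used to cold-start a second target. Condensed to the suite's rpc_cmd wrapper, the build-up phase below amounts to something like this sketch (method names and parameters are taken from the saved config that follows; the exact command-line spellings of the arguments are assumptions):

    rpc_cmd ublk_create_target                      # cpumask "1"
    rpc_cmd bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B = 32 MiB
    rpc_cmd ublk_start_disk malloc0 0 -q 1 -d 128   # 1 queue, queue depth 128
    config=$(rpc_cmd save_config)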
00:13:06.438 [2024-12-15 09:50:55.221032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69007 ] 00:13:06.438 [2024-12-15 09:50:55.374874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.698 [2024-12-15 09:50:55.607768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.698 [2024-12-15 09:50:55.608228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.085 09:50:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.085 09:50:56 -- common/autotest_common.sh@862 -- # return 0 00:13:08.085 09:50:56 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:08.085 09:50:56 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:08.085 09:50:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.085 09:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:08.085 [2024-12-15 09:50:56.759109] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:08.085 malloc0 00:13:08.085 [2024-12-15 09:50:56.830407] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:08.085 [2024-12-15 09:50:56.830502] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:08.085 [2024-12-15 09:50:56.830511] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:08.085 [2024-12-15 09:50:56.830521] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:08.085 [2024-12-15 09:50:56.839373] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:08.085 [2024-12-15 09:50:56.839407] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:08.085 [2024-12-15 09:50:56.846284] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:08.085 [2024-12-15 09:50:56.846403] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:08.085 [2024-12-15 09:50:56.863290] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:08.085 0 00:13:08.085 09:50:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.085 09:50:56 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:08.085 09:50:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.085 09:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:08.346 09:50:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.346 09:50:57 -- ublk/ublk.sh@115 -- # config='{ 00:13:08.346 "subsystems": [ 00:13:08.346 { 00:13:08.346 "subsystem": "iobuf", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "iobuf_set_options", 00:13:08.346 "params": { 00:13:08.346 "small_pool_count": 8192, 00:13:08.346 "large_pool_count": 1024, 00:13:08.346 "small_bufsize": 8192, 00:13:08.346 "large_bufsize": 135168 00:13:08.346 } 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "sock", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "sock_impl_set_options", 00:13:08.346 "params": { 00:13:08.346 "impl_name": "posix", 00:13:08.346 "recv_buf_size": 2097152, 00:13:08.346 "send_buf_size": 2097152, 00:13:08.346 "enable_recv_pipe": true, 00:13:08.346 "enable_quickack": false, 00:13:08.346 "enable_placement_id": 0, 00:13:08.346 
"enable_zerocopy_send_server": true, 00:13:08.346 "enable_zerocopy_send_client": false, 00:13:08.346 "zerocopy_threshold": 0, 00:13:08.346 "tls_version": 0, 00:13:08.346 "enable_ktls": false 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "sock_impl_set_options", 00:13:08.346 "params": { 00:13:08.346 "impl_name": "ssl", 00:13:08.346 "recv_buf_size": 4096, 00:13:08.346 "send_buf_size": 4096, 00:13:08.346 "enable_recv_pipe": true, 00:13:08.346 "enable_quickack": false, 00:13:08.346 "enable_placement_id": 0, 00:13:08.346 "enable_zerocopy_send_server": true, 00:13:08.346 "enable_zerocopy_send_client": false, 00:13:08.346 "zerocopy_threshold": 0, 00:13:08.346 "tls_version": 0, 00:13:08.346 "enable_ktls": false 00:13:08.346 } 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "vmd", 00:13:08.346 "config": [] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "accel", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "accel_set_options", 00:13:08.346 "params": { 00:13:08.346 "small_cache_size": 128, 00:13:08.346 "large_cache_size": 16, 00:13:08.346 "task_count": 2048, 00:13:08.346 "sequence_count": 2048, 00:13:08.346 "buf_count": 2048 00:13:08.346 } 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "bdev", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "bdev_set_options", 00:13:08.346 "params": { 00:13:08.346 "bdev_io_pool_size": 65535, 00:13:08.346 "bdev_io_cache_size": 256, 00:13:08.346 "bdev_auto_examine": true, 00:13:08.346 "iobuf_small_cache_size": 128, 00:13:08.346 "iobuf_large_cache_size": 16 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "bdev_raid_set_options", 00:13:08.346 "params": { 00:13:08.346 "process_window_size_kb": 1024 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "bdev_iscsi_set_options", 00:13:08.346 "params": { 00:13:08.346 "timeout_sec": 30 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "bdev_nvme_set_options", 00:13:08.346 "params": { 00:13:08.346 "action_on_timeout": "none", 00:13:08.346 "timeout_us": 0, 00:13:08.346 "timeout_admin_us": 0, 00:13:08.346 "keep_alive_timeout_ms": 10000, 00:13:08.346 "transport_retry_count": 4, 00:13:08.346 "arbitration_burst": 0, 00:13:08.346 "low_priority_weight": 0, 00:13:08.346 "medium_priority_weight": 0, 00:13:08.346 "high_priority_weight": 0, 00:13:08.346 "nvme_adminq_poll_period_us": 10000, 00:13:08.346 "nvme_ioq_poll_period_us": 0, 00:13:08.346 "io_queue_requests": 0, 00:13:08.346 "delay_cmd_submit": true, 00:13:08.346 "bdev_retry_count": 3, 00:13:08.346 "transport_ack_timeout": 0, 00:13:08.346 "ctrlr_loss_timeout_sec": 0, 00:13:08.346 "reconnect_delay_sec": 0, 00:13:08.346 "fast_io_fail_timeout_sec": 0, 00:13:08.346 "generate_uuids": false, 00:13:08.346 "transport_tos": 0, 00:13:08.346 "io_path_stat": false, 00:13:08.346 "allow_accel_sequence": false 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "bdev_nvme_set_hotplug", 00:13:08.346 "params": { 00:13:08.346 "period_us": 100000, 00:13:08.346 "enable": false 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "bdev_malloc_create", 00:13:08.346 "params": { 00:13:08.346 "name": "malloc0", 00:13:08.346 "num_blocks": 8192, 00:13:08.346 "block_size": 4096, 00:13:08.346 "physical_block_size": 4096, 00:13:08.346 "uuid": "1c0348a1-970b-47c1-8147-14cdf944c3a7", 00:13:08.346 "optimal_io_boundary": 0 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 
"method": "bdev_wait_for_examine" 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "scsi", 00:13:08.346 "config": null 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "scheduler", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "framework_set_scheduler", 00:13:08.346 "params": { 00:13:08.346 "name": "static" 00:13:08.346 } 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "vhost_scsi", 00:13:08.346 "config": [] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "vhost_blk", 00:13:08.346 "config": [] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "ublk", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "ublk_create_target", 00:13:08.346 "params": { 00:13:08.346 "cpumask": "1" 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "ublk_start_disk", 00:13:08.346 "params": { 00:13:08.346 "bdev_name": "malloc0", 00:13:08.346 "ublk_id": 0, 00:13:08.346 "num_queues": 1, 00:13:08.346 "queue_depth": 128 00:13:08.346 } 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "nbd", 00:13:08.346 "config": [] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "nvmf", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "nvmf_set_config", 00:13:08.346 "params": { 00:13:08.346 "discovery_filter": "match_any", 00:13:08.346 "admin_cmd_passthru": { 00:13:08.346 "identify_ctrlr": false 00:13:08.346 } 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "nvmf_set_max_subsystems", 00:13:08.346 "params": { 00:13:08.346 "max_subsystems": 1024 00:13:08.346 } 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "method": "nvmf_set_crdt", 00:13:08.346 "params": { 00:13:08.346 "crdt1": 0, 00:13:08.346 "crdt2": 0, 00:13:08.346 "crdt3": 0 00:13:08.346 } 00:13:08.346 } 00:13:08.346 ] 00:13:08.346 }, 00:13:08.346 { 00:13:08.346 "subsystem": "iscsi", 00:13:08.346 "config": [ 00:13:08.346 { 00:13:08.346 "method": "iscsi_set_options", 00:13:08.346 "params": { 00:13:08.346 "node_base": "iqn.2016-06.io.spdk", 00:13:08.346 "max_sessions": 128, 00:13:08.346 "max_connections_per_session": 2, 00:13:08.346 "max_queue_depth": 64, 00:13:08.346 "default_time2wait": 2, 00:13:08.346 "default_time2retain": 20, 00:13:08.346 "first_burst_length": 8192, 00:13:08.346 "immediate_data": true, 00:13:08.346 "allow_duplicated_isid": false, 00:13:08.346 "error_recovery_level": 0, 00:13:08.346 "nop_timeout": 60, 00:13:08.346 "nop_in_interval": 30, 00:13:08.346 "disable_chap": false, 00:13:08.346 "require_chap": false, 00:13:08.346 "mutual_chap": false, 00:13:08.346 "chap_group": 0, 00:13:08.346 "max_large_datain_per_connection": 64, 00:13:08.346 "max_r2t_per_connection": 4, 00:13:08.346 "pdu_pool_size": 36864, 00:13:08.346 "immediate_data_pool_size": 16384, 00:13:08.347 "data_out_pool_size": 2048 00:13:08.347 } 00:13:08.347 } 00:13:08.347 ] 00:13:08.347 } 00:13:08.347 ] 00:13:08.347 }' 00:13:08.347 09:50:57 -- ublk/ublk.sh@116 -- # killprocess 69007 00:13:08.347 09:50:57 -- common/autotest_common.sh@936 -- # '[' -z 69007 ']' 00:13:08.347 09:50:57 -- common/autotest_common.sh@940 -- # kill -0 69007 00:13:08.347 09:50:57 -- common/autotest_common.sh@941 -- # uname 00:13:08.347 09:50:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:08.347 09:50:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69007 00:13:08.347 killing process with pid 69007 00:13:08.347 09:50:57 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:13:08.347 09:50:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:08.347 09:50:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69007' 00:13:08.347 09:50:57 -- common/autotest_common.sh@955 -- # kill 69007 00:13:08.347 09:50:57 -- common/autotest_common.sh@960 -- # wait 69007 00:13:09.290 [2024-12-15 09:50:58.261541] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:09.551 [2024-12-15 09:50:58.305305] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:09.551 [2024-12-15 09:50:58.305449] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:09.551 [2024-12-15 09:50:58.309543] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:09.551 [2024-12-15 09:50:58.309610] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:09.551 [2024-12-15 09:50:58.309625] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:09.551 [2024-12-15 09:50:58.309656] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:09.551 [2024-12-15 09:50:58.309802] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:10.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.933 09:50:59 -- ublk/ublk.sh@119 -- # tgtpid=69069 00:13:10.933 09:50:59 -- ublk/ublk.sh@121 -- # waitforlisten 69069 00:13:10.933 09:50:59 -- common/autotest_common.sh@829 -- # '[' -z 69069 ']' 00:13:10.933 09:50:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.933 09:50:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.933 09:50:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
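The second half of the test feeds the saved JSON back into a fresh target through a file descriptor and then checks that the ublk device reappears without a single RPC being issued. Reassembled from the commands logged around this point (the $rootdir and $config variable names and the use of process substitution are assumptions; /dev/fd/63 on the logged command line is the typical trace of bash's <(...)):

    "$rootdir"/build/bin/spdk_tgt -L ublk -c <(echo "$config") &
    tgtpid=$!
    waitforlisten "$tgtpid"
    # the disk must exist purely from the replayed config
    [[ $(rpc_cmd ublk_get_disks | jq -r '.[0].ublk_device') == /dev/ublkb0 ]]
    [[ -b /dev/ublkb0 ]]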
00:13:10.933 09:50:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.933 09:50:59 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:10.933 09:50:59 -- common/autotest_common.sh@10 -- # set +x 00:13:10.933 09:50:59 -- ublk/ublk.sh@118 -- # echo '{ 00:13:10.933 "subsystems": [ 00:13:10.933 { 00:13:10.933 "subsystem": "iobuf", 00:13:10.933 "config": [ 00:13:10.933 { 00:13:10.933 "method": "iobuf_set_options", 00:13:10.933 "params": { 00:13:10.933 "small_pool_count": 8192, 00:13:10.933 "large_pool_count": 1024, 00:13:10.933 "small_bufsize": 8192, 00:13:10.933 "large_bufsize": 135168 00:13:10.933 } 00:13:10.933 } 00:13:10.933 ] 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "subsystem": "sock", 00:13:10.933 "config": [ 00:13:10.933 { 00:13:10.933 "method": "sock_impl_set_options", 00:13:10.933 "params": { 00:13:10.933 "impl_name": "posix", 00:13:10.933 "recv_buf_size": 2097152, 00:13:10.933 "send_buf_size": 2097152, 00:13:10.933 "enable_recv_pipe": true, 00:13:10.933 "enable_quickack": false, 00:13:10.933 "enable_placement_id": 0, 00:13:10.933 "enable_zerocopy_send_server": true, 00:13:10.933 "enable_zerocopy_send_client": false, 00:13:10.933 "zerocopy_threshold": 0, 00:13:10.933 "tls_version": 0, 00:13:10.933 "enable_ktls": false 00:13:10.933 } 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "method": "sock_impl_set_options", 00:13:10.933 "params": { 00:13:10.933 "impl_name": "ssl", 00:13:10.933 "recv_buf_size": 4096, 00:13:10.933 "send_buf_size": 4096, 00:13:10.933 "enable_recv_pipe": true, 00:13:10.933 "enable_quickack": false, 00:13:10.933 "enable_placement_id": 0, 00:13:10.933 "enable_zerocopy_send_server": true, 00:13:10.933 "enable_zerocopy_send_client": false, 00:13:10.933 "zerocopy_threshold": 0, 00:13:10.933 "tls_version": 0, 00:13:10.933 "enable_ktls": false 00:13:10.933 } 00:13:10.933 } 00:13:10.933 ] 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "subsystem": "vmd", 00:13:10.933 "config": [] 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "subsystem": "accel", 00:13:10.933 "config": [ 00:13:10.933 { 00:13:10.933 "method": "accel_set_options", 00:13:10.933 "params": { 00:13:10.933 "small_cache_size": 128, 00:13:10.933 "large_cache_size": 16, 00:13:10.933 "task_count": 2048, 00:13:10.933 "sequence_count": 2048, 00:13:10.933 "buf_count": 2048 00:13:10.933 } 00:13:10.933 } 00:13:10.933 ] 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "subsystem": "bdev", 00:13:10.933 "config": [ 00:13:10.933 { 00:13:10.933 "method": "bdev_set_options", 00:13:10.933 "params": { 00:13:10.933 "bdev_io_pool_size": 65535, 00:13:10.933 "bdev_io_cache_size": 256, 00:13:10.933 "bdev_auto_examine": true, 00:13:10.933 "iobuf_small_cache_size": 128, 00:13:10.933 "iobuf_large_cache_size": 16 00:13:10.933 } 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "method": "bdev_raid_set_options", 00:13:10.933 "params": { 00:13:10.933 "process_window_size_kb": 1024 00:13:10.933 } 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "method": "bdev_iscsi_set_options", 00:13:10.933 "params": { 00:13:10.933 "timeout_sec": 30 00:13:10.933 } 00:13:10.933 }, 00:13:10.933 { 00:13:10.933 "method": "bdev_nvme_set_options", 00:13:10.933 "params": { 00:13:10.933 "action_on_timeout": "none", 00:13:10.933 "timeout_us": 0, 00:13:10.934 "timeout_admin_us": 0, 00:13:10.934 "keep_alive_timeout_ms": 10000, 00:13:10.934 "transport_retry_count": 4, 00:13:10.934 "arbitration_burst": 0, 00:13:10.934 "low_priority_weight": 0, 00:13:10.934 "medium_priority_weight": 0, 00:13:10.934 "high_priority_weight": 0, 
00:13:10.934 "nvme_adminq_poll_period_us": 10000, 00:13:10.934 "nvme_ioq_poll_period_us": 0, 00:13:10.934 "io_queue_requests": 0, 00:13:10.934 "delay_cmd_submit": true, 00:13:10.934 "bdev_retry_count": 3, 00:13:10.934 "transport_ack_timeout": 0, 00:13:10.934 "ctrlr_loss_timeout_sec": 0, 00:13:10.934 "reconnect_delay_sec": 0, 00:13:10.934 "fast_io_fail_timeout_sec": 0, 00:13:10.934 "generate_uuids": false, 00:13:10.934 "transport_tos": 0, 00:13:10.934 "io_path_stat": false, 00:13:10.934 "allow_accel_sequence": false 00:13:10.934 } 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "method": "bdev_nvme_set_hotplug", 00:13:10.934 "params": { 00:13:10.934 "period_us": 100000, 00:13:10.934 "enable": false 00:13:10.934 } 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "method": "bdev_malloc_create", 00:13:10.934 "params": { 00:13:10.934 "name": "malloc0", 00:13:10.934 "num_blocks": 8192, 00:13:10.934 "block_size": 4096, 00:13:10.934 "physical_block_size": 4096, 00:13:10.934 "uuid": "1c0348a1-970b-47c1-8147-14cdf944c3a7", 00:13:10.934 "optimal_io_boundary": 0 00:13:10.934 } 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "method": "bdev_wait_for_examine" 00:13:10.934 } 00:13:10.934 ] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "scsi", 00:13:10.934 "config": null 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "scheduler", 00:13:10.934 "config": [ 00:13:10.934 { 00:13:10.934 "method": "framework_set_scheduler", 00:13:10.934 "params": { 00:13:10.934 "name": "static" 00:13:10.934 } 00:13:10.934 } 00:13:10.934 ] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "vhost_scsi", 00:13:10.934 "config": [] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "vhost_blk", 00:13:10.934 "config": [] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "ublk", 00:13:10.934 "config": [ 00:13:10.934 { 00:13:10.934 "method": "ublk_create_target", 00:13:10.934 "params": { 00:13:10.934 "cpumask": "1" 00:13:10.934 } 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "method": "ublk_start_disk", 00:13:10.934 "params": { 00:13:10.934 "bdev_name": "malloc0", 00:13:10.934 "ublk_id": 0, 00:13:10.934 "num_queues": 1, 00:13:10.934 "queue_depth": 128 00:13:10.934 } 00:13:10.934 } 00:13:10.934 ] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "nbd", 00:13:10.934 "config": [] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "nvmf", 00:13:10.934 "config": [ 00:13:10.934 { 00:13:10.934 "method": "nvmf_set_config", 00:13:10.934 "params": { 00:13:10.934 "discovery_filter": "match_any", 00:13:10.934 "admin_cmd_passthru": { 00:13:10.934 "identify_ctrlr": false 00:13:10.934 } 00:13:10.934 } 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "method": "nvmf_set_max_subsystems", 00:13:10.934 "params": { 00:13:10.934 "max_subsystems": 1024 00:13:10.934 } 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "method": "nvmf_set_crdt", 00:13:10.934 "params": { 00:13:10.934 "crdt1": 0, 00:13:10.934 "crdt2": 0, 00:13:10.934 "crdt3": 0 00:13:10.934 } 00:13:10.934 } 00:13:10.934 ] 00:13:10.934 }, 00:13:10.934 { 00:13:10.934 "subsystem": "iscsi", 00:13:10.934 "config": [ 00:13:10.934 { 00:13:10.934 "method": "iscsi_set_options", 00:13:10.934 "params": { 00:13:10.934 "node_base": "iqn.2016-06.io.spdk", 00:13:10.934 "max_sessions": 128, 00:13:10.934 "max_connections_per_session": 2, 00:13:10.934 "max_queue_depth": 64, 00:13:10.934 "default_time2wait": 2, 00:13:10.934 "default_time2retain": 20, 00:13:10.934 "first_burst_length": 8192, 00:13:10.934 "immediate_data": true, 00:13:10.934 "allow_duplicated_isid": false, 00:13:10.934 
"error_recovery_level": 0, 00:13:10.934 "nop_timeout": 60, 00:13:10.934 "nop_in_interval": 30, 00:13:10.934 "disable_chap": false, 00:13:10.934 "require_chap": false, 00:13:10.934 "mutual_chap": false, 00:13:10.934 "chap_group": 0, 00:13:10.934 "max_large_datain_per_connection": 64, 00:13:10.934 "max_r2t_per_connection": 4, 00:13:10.934 "pdu_pool_size": 36864, 00:13:10.934 "immediate_data_pool_size": 16384, 00:13:10.934 "data_out_pool_size": 2048 00:13:10.934 } 00:13:10.934 } 00:13:10.934 ] 00:13:10.934 } 00:13:10.934 ] 00:13:10.934 }' 00:13:10.934 [2024-12-15 09:50:59.712028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:10.934 [2024-12-15 09:50:59.712315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69069 ] 00:13:10.934 [2024-12-15 09:50:59.860411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.193 [2024-12-15 09:51:00.032197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.193 [2024-12-15 09:51:00.032402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.822 [2024-12-15 09:51:00.701079] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:11.822 [2024-12-15 09:51:00.708401] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:11.822 [2024-12-15 09:51:00.708492] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:11.822 [2024-12-15 09:51:00.708500] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:11.822 [2024-12-15 09:51:00.708508] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:11.822 [2024-12-15 09:51:00.717395] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:11.822 [2024-12-15 09:51:00.717425] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:11.822 [2024-12-15 09:51:00.724291] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:11.822 [2024-12-15 09:51:00.724408] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:11.822 [2024-12-15 09:51:00.741287] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:12.394 09:51:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.394 09:51:01 -- common/autotest_common.sh@862 -- # return 0 00:13:12.394 09:51:01 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:12.394 09:51:01 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:12.394 09:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.394 09:51:01 -- common/autotest_common.sh@10 -- # set +x 00:13:12.394 09:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.394 09:51:01 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:12.394 09:51:01 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:12.394 09:51:01 -- ublk/ublk.sh@125 -- # killprocess 69069 00:13:12.394 09:51:01 -- common/autotest_common.sh@936 -- # '[' -z 69069 ']' 00:13:12.394 09:51:01 -- common/autotest_common.sh@940 -- # kill -0 69069 00:13:12.394 09:51:01 -- common/autotest_common.sh@941 -- # uname 00:13:12.394 09:51:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:13:12.394 09:51:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69069 00:13:12.394 killing process with pid 69069 00:13:12.394 09:51:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:12.394 09:51:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:12.394 09:51:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69069' 00:13:12.394 09:51:01 -- common/autotest_common.sh@955 -- # kill 69069 00:13:12.394 09:51:01 -- common/autotest_common.sh@960 -- # wait 69069 00:13:13.329 [2024-12-15 09:51:02.097797] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:13.329 [2024-12-15 09:51:02.127335] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:13.329 [2024-12-15 09:51:02.127425] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:13.329 [2024-12-15 09:51:02.136280] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:13.329 [2024-12-15 09:51:02.136319] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:13.329 [2024-12-15 09:51:02.136325] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:13.329 [2024-12-15 09:51:02.136346] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:13.329 [2024-12-15 09:51:02.136454] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:14.703 09:51:03 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:14.703 00:13:14.703 real 0m8.328s 00:13:14.703 user 0m6.229s 00:13:14.703 sys 0m3.071s 00:13:14.704 09:51:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:14.704 09:51:03 -- common/autotest_common.sh@10 -- # set +x 00:13:14.704 ************************************ 00:13:14.704 END TEST test_save_ublk_config 00:13:14.704 ************************************ 00:13:14.704 09:51:03 -- ublk/ublk.sh@139 -- # spdk_pid=69144 00:13:14.704 09:51:03 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:14.704 09:51:03 -- ublk/ublk.sh@141 -- # waitforlisten 69144 00:13:14.704 09:51:03 -- common/autotest_common.sh@829 -- # '[' -z 69144 ']' 00:13:14.704 09:51:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.704 09:51:03 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:14.704 09:51:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.704 09:51:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.704 09:51:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.704 09:51:03 -- common/autotest_common.sh@10 -- # set +x 00:13:14.704 [2024-12-15 09:51:03.579265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
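The test_save_ublk_config run above boots spdk_tgt from a JSON configuration streamed in over /dev/fd/63. A minimal sketch of the same save/restore round-trip outside the harness (the /tmp path is an illustrative assumption; save_config and the -c flag are standard SPDK usage):

    # Snapshot the live configuration of a running target (default socket
    # /var/tmp/spdk.sock), then boot a fresh target from that snapshot.
    scripts/rpc.py save_config > /tmp/ublk_config.json
    build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json
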
00:13:14.704 [2024-12-15 09:51:03.579371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69144 ] 00:13:14.961 [2024-12-15 09:51:03.727606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:14.961 [2024-12-15 09:51:03.868326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.961 [2024-12-15 09:51:03.868730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.961 [2024-12-15 09:51:03.868785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.528 09:51:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.528 09:51:04 -- common/autotest_common.sh@862 -- # return 0 00:13:15.528 09:51:04 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:15.528 09:51:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:15.528 09:51:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.528 09:51:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.528 ************************************ 00:13:15.528 START TEST test_create_ublk 00:13:15.528 ************************************ 00:13:15.528 09:51:04 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:15.528 09:51:04 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:15.528 09:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.528 09:51:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.528 [2024-12-15 09:51:04.327739] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:15.528 09:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.528 09:51:04 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:15.528 09:51:04 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:15.528 09:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.528 09:51:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.528 09:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.528 09:51:04 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:15.528 09:51:04 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:15.528 09:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.528 09:51:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.528 [2024-12-15 09:51:04.486381] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:15.528 [2024-12-15 09:51:04.486681] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:15.528 [2024-12-15 09:51:04.486692] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:15.528 [2024-12-15 09:51:04.486700] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:15.528 [2024-12-15 09:51:04.495460] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:15.528 [2024-12-15 09:51:04.495479] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:15.528 [2024-12-15 09:51:04.502281] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:15.528 [2024-12-15 09:51:04.514436] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:15.528 [2024-12-15 09:51:04.529346] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:15.528 09:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.528 09:51:04 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:15.528 09:51:04 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:15.528 09:51:04 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:15.528 09:51:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.528 09:51:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.786 09:51:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.786 09:51:04 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:15.786 { 00:13:15.786 "ublk_device": "/dev/ublkb0", 00:13:15.786 "id": 0, 00:13:15.786 "queue_depth": 512, 00:13:15.786 "num_queues": 4, 00:13:15.786 "bdev_name": "Malloc0" 00:13:15.786 } 00:13:15.786 ]' 00:13:15.786 09:51:04 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:15.786 09:51:04 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:15.786 09:51:04 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:15.786 09:51:04 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:15.786 09:51:04 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:15.786 09:51:04 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:15.786 09:51:04 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:15.786 09:51:04 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:15.786 09:51:04 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:15.786 09:51:04 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:15.786 09:51:04 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:15.786 09:51:04 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:15.786 09:51:04 -- lvol/common.sh@41 -- # local offset=0 00:13:15.786 09:51:04 -- lvol/common.sh@42 -- # local size=134217728 00:13:15.786 09:51:04 -- lvol/common.sh@43 -- # local rw=write 00:13:15.786 09:51:04 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:15.786 09:51:04 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:15.786 09:51:04 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:15.786 09:51:04 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:15.786 09:51:04 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:15.786 09:51:04 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:15.786 09:51:04 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:16.044 fio: verification read phase will never start because write phase uses all of runtime 00:13:16.044 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:16.044 fio-3.35 00:13:16.044 Starting 1 process 00:13:26.006 00:13:26.006 fio_test: (groupid=0, jobs=1): err= 0: pid=69188: Sun Dec 15 09:51:14 2024 00:13:26.006 write: IOPS=15.2k, BW=59.5MiB/s (62.3MB/s)(595MiB/10001msec); 0 zone resets 00:13:26.006 clat (usec): min=44, max=4045, avg=65.01, stdev=93.98 00:13:26.006 lat (usec): min=44, max=4045, avg=65.40, stdev=93.99 00:13:26.006 clat percentiles (usec): 00:13:26.006 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 58], 00:13:26.006 | 30.00th=[ 
59], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 63], 00:13:26.006 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 69], 95.00th=[ 72], 00:13:26.006 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 1893], 99.95th=[ 2769], 00:13:26.006 | 99.99th=[ 3490] 00:13:26.006 bw ( KiB/s): min=59352, max=62864, per=100.00%, avg=60986.95, stdev=977.06, samples=19 00:13:26.006 iops : min=14838, max=15716, avg=15246.74, stdev=244.27, samples=19 00:13:26.006 lat (usec) : 50=1.51%, 100=98.17%, 250=0.13%, 500=0.02%, 750=0.01% 00:13:26.006 lat (usec) : 1000=0.02% 00:13:26.006 lat (msec) : 2=0.06%, 4=0.09%, 10=0.01% 00:13:26.006 cpu : usr=2.01%, sys=13.95%, ctx=152231, majf=0, minf=797 00:13:26.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.006 issued rwts: total=0,152228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.006 00:13:26.006 Run status group 0 (all jobs): 00:13:26.006 WRITE: bw=59.5MiB/s (62.3MB/s), 59.5MiB/s-59.5MiB/s (62.3MB/s-62.3MB/s), io=595MiB (624MB), run=10001-10001msec 00:13:26.006 00:13:26.006 Disk stats (read/write): 00:13:26.006 ublkb0: ios=0/150676, merge=0/0, ticks=0/8195, in_queue=8196, util=99.10% 00:13:26.006 09:51:14 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:26.006 09:51:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.006 09:51:14 -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 [2024-12-15 09:51:14.949975] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.006 [2024-12-15 09:51:14.991779] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.006 [2024-12-15 09:51:14.992831] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.006 [2024-12-15 09:51:14.999279] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.006 [2024-12-15 09:51:14.999535] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:26.006 [2024-12-15 09:51:14.999549] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:26.006 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.006 09:51:15 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:26.006 09:51:15 -- common/autotest_common.sh@650 -- # local es=0 00:13:26.006 09:51:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:26.006 09:51:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:26.006 09:51:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.006 09:51:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:26.006 09:51:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.006 09:51:15 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:26.006 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.006 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 [2024-12-15 09:51:15.015345] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:26.269 request: 00:13:26.269 { 00:13:26.269 "ublk_id": 0, 00:13:26.269 "method": "ublk_stop_disk", 00:13:26.269 "req_id": 1 00:13:26.269 } 00:13:26.269 Got JSON-RPC error response 00:13:26.269 response: 00:13:26.269 { 00:13:26.269 "code": -19, 00:13:26.269 
"message": "No such device" 00:13:26.269 } 00:13:26.269 09:51:15 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:26.269 09:51:15 -- common/autotest_common.sh@653 -- # es=1 00:13:26.269 09:51:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.269 09:51:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.269 09:51:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.269 09:51:15 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:26.269 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.269 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.269 [2024-12-15 09:51:15.031316] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:26.269 [2024-12-15 09:51:15.039270] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:26.269 [2024-12-15 09:51:15.039297] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:26.269 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.269 09:51:15 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:26.269 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.269 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.528 09:51:15 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:26.528 09:51:15 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:26.528 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.528 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.528 09:51:15 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:26.528 09:51:15 -- lvol/common.sh@26 -- # jq length 00:13:26.528 09:51:15 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:26.528 09:51:15 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:26.528 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.528 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.528 09:51:15 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:26.528 09:51:15 -- lvol/common.sh@28 -- # jq length 00:13:26.528 09:51:15 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:26.528 00:13:26.528 real 0m11.170s 00:13:26.528 user 0m0.492s 00:13:26.528 sys 0m1.477s 00:13:26.528 09:51:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:26.528 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 ************************************ 00:13:26.528 END TEST test_create_ublk 00:13:26.528 ************************************ 00:13:26.528 09:51:15 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:26.528 09:51:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:26.528 09:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:26.528 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 ************************************ 00:13:26.528 START TEST test_create_multi_ublk 00:13:26.528 ************************************ 00:13:26.528 09:51:15 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:26.528 09:51:15 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:26.528 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.528 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.528 [2024-12-15 09:51:15.531746] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:13:26.528 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.528 09:51:15 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:26.528 09:51:15 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:26.528 09:51:15 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.528 09:51:15 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:26.528 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.528 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.786 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.786 09:51:15 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:26.786 09:51:15 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:26.786 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.786 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:26.786 [2024-12-15 09:51:15.758372] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:26.786 [2024-12-15 09:51:15.758673] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:26.786 [2024-12-15 09:51:15.758685] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:26.786 [2024-12-15 09:51:15.758692] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:26.786 [2024-12-15 09:51:15.782279] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:26.786 [2024-12-15 09:51:15.782297] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:26.786 [2024-12-15 09:51:15.794279] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:26.786 [2024-12-15 09:51:15.794768] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:27.044 [2024-12-15 09:51:15.812338] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.044 09:51:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.044 09:51:15 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:27.044 09:51:15 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.044 09:51:15 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:27.044 09:51:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.044 09:51:15 -- common/autotest_common.sh@10 -- # set +x 00:13:27.044 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.044 09:51:16 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:27.044 09:51:16 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:27.044 09:51:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.044 09:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.044 [2024-12-15 09:51:16.038361] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:27.044 [2024-12-15 09:51:16.038653] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:27.044 [2024-12-15 09:51:16.038665] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:27.044 [2024-12-15 09:51:16.038670] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.044 [2024-12-15 09:51:16.046299] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.044 [2024-12-15 09:51:16.046314] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.044 
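Each ublk_start_disk call above drives the same kernel handshake: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV. Since rpc_cmd is the harness wrapper around scripts/rpc.py, a minimal sketch of the equivalent manual sequence for one disk, using only the names and flags already exercised in this trace, is:

    # Stand up the ublk target, back it with a 128 MiB malloc bdev (4096-byte
    # blocks), and expose it as /dev/ublkb1 with 4 queues of depth 512.
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b Malloc1 128 4096
    scripts/rpc.py ublk_start_disk Malloc1 1 -q 4 -d 512
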
[2024-12-15 09:51:16.054274] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.044 [2024-12-15 09:51:16.054769] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:27.302 [2024-12-15 09:51:16.063292] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.302 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.302 09:51:16 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:27.302 09:51:16 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.302 09:51:16 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:27.302 09:51:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.302 09:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.302 09:51:16 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:27.302 09:51:16 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:27.302 09:51:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.302 09:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.302 [2024-12-15 09:51:16.230366] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:27.302 [2024-12-15 09:51:16.230656] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:27.302 [2024-12-15 09:51:16.230667] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:27.302 [2024-12-15 09:51:16.230675] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.302 [2024-12-15 09:51:16.238286] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.302 [2024-12-15 09:51:16.238305] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.302 [2024-12-15 09:51:16.246274] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.302 [2024-12-15 09:51:16.246760] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:27.302 [2024-12-15 09:51:16.255303] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.302 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.302 09:51:16 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:27.302 09:51:16 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.302 09:51:16 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:27.302 09:51:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.302 09:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.559 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.559 09:51:16 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:27.559 09:51:16 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:27.559 09:51:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.559 09:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.559 [2024-12-15 09:51:16.422368] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:27.559 [2024-12-15 09:51:16.422657] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:27.559 [2024-12-15 09:51:16.422668] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:27.559 [2024-12-15 09:51:16.422674] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.559 [2024-12-15 09:51:16.431440] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.559 [2024-12-15 09:51:16.431454] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.560 [2024-12-15 09:51:16.438280] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.560 [2024-12-15 09:51:16.438758] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:27.560 [2024-12-15 09:51:16.441975] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.560 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.560 09:51:16 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:27.560 09:51:16 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:27.560 09:51:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.560 09:51:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.560 09:51:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.560 09:51:16 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:27.560 { 00:13:27.560 "ublk_device": "/dev/ublkb0", 00:13:27.560 "id": 0, 00:13:27.560 "queue_depth": 512, 00:13:27.560 "num_queues": 4, 00:13:27.560 "bdev_name": "Malloc0" 00:13:27.560 }, 00:13:27.560 { 00:13:27.560 "ublk_device": "/dev/ublkb1", 00:13:27.560 "id": 1, 00:13:27.560 "queue_depth": 512, 00:13:27.560 "num_queues": 4, 00:13:27.560 "bdev_name": "Malloc1" 00:13:27.560 }, 00:13:27.560 { 00:13:27.560 "ublk_device": "/dev/ublkb2", 00:13:27.560 "id": 2, 00:13:27.560 "queue_depth": 512, 00:13:27.560 "num_queues": 4, 00:13:27.560 "bdev_name": "Malloc2" 00:13:27.560 }, 00:13:27.560 { 00:13:27.560 "ublk_device": "/dev/ublkb3", 00:13:27.560 "id": 3, 00:13:27.560 "queue_depth": 512, 00:13:27.560 "num_queues": 4, 00:13:27.560 "bdev_name": "Malloc3" 00:13:27.560 } 00:13:27.560 ]' 00:13:27.560 09:51:16 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:27.560 09:51:16 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.560 09:51:16 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:27.560 09:51:16 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:27.560 09:51:16 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:27.560 09:51:16 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:27.560 09:51:16 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:27.560 09:51:16 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.560 09:51:16 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:27.818 09:51:16 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:27.818 09:51:16 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.818 09:51:16 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:27.818 09:51:16 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:27.818 09:51:16 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:27.818 09:51:16 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:27.818 09:51:16 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:27.818 09:51:16 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:27.818 09:51:16 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.818 09:51:16 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:27.818 09:51:16 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:27.818 09:51:16 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:28.075 09:51:16 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:28.075 09:51:16 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:28.075 09:51:16 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:28.075 09:51:16 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:28.075 09:51:16 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:28.075 09:51:16 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:28.075 09:51:16 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:28.075 09:51:16 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.075 09:51:16 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:28.075 09:51:16 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:28.075 09:51:16 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:28.075 09:51:16 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:28.075 09:51:16 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:28.075 09:51:17 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:28.075 09:51:17 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:28.075 09:51:17 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:28.075 09:51:17 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:28.334 09:51:17 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:28.334 09:51:17 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:28.334 09:51:17 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:28.334 09:51:17 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.334 09:51:17 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:28.334 09:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.334 09:51:17 -- common/autotest_common.sh@10 -- # set +x 00:13:28.334 [2024-12-15 09:51:17.098336] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.334 [2024-12-15 09:51:17.145786] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.334 [2024-12-15 09:51:17.147030] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.334 [2024-12-15 09:51:17.153289] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.334 [2024-12-15 09:51:17.153530] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:28.334 [2024-12-15 09:51:17.153543] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:28.334 09:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.334 09:51:17 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.334 09:51:17 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:28.334 09:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.334 09:51:17 -- common/autotest_common.sh@10 -- # set +x 00:13:28.334 [2024-12-15 09:51:17.169330] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.334 [2024-12-15 09:51:17.205281] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.334 [2024-12-15 09:51:17.206104] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.334 [2024-12-15 09:51:17.213282] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.334 [2024-12-15 09:51:17.213531] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:28.334 [2024-12-15 09:51:17.213544] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:28.334 09:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.334 09:51:17 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.334 09:51:17 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:28.334 09:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.334 09:51:17 -- common/autotest_common.sh@10 -- # set +x 00:13:28.334 [2024-12-15 09:51:17.229319] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.334 [2024-12-15 09:51:17.259778] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.334 [2024-12-15 09:51:17.260876] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.334 [2024-12-15 09:51:17.265293] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.334 [2024-12-15 09:51:17.265536] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:28.334 [2024-12-15 09:51:17.265550] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:28.334 09:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.334 09:51:17 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.334 09:51:17 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:28.334 09:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.334 09:51:17 -- common/autotest_common.sh@10 -- # set +x 00:13:28.334 [2024-12-15 09:51:17.281337] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:28.334 [2024-12-15 09:51:17.317313] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:28.334 [2024-12-15 09:51:17.317947] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:28.334 [2024-12-15 09:51:17.325282] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:28.334 [2024-12-15 09:51:17.325507] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:28.334 [2024-12-15 09:51:17.325515] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:28.334 09:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.334 09:51:17 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:28.592 [2024-12-15 09:51:17.509337] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:28.592 [2024-12-15 09:51:17.517271] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:28.592 [2024-12-15 09:51:17.517296] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:28.592 09:51:17 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:28.592 09:51:17 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:28.592 09:51:17 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:28.592 09:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.592 09:51:17 -- common/autotest_common.sh@10 -- # set +x 00:13:29.159 09:51:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.159 09:51:17 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.159 09:51:17 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:29.159 09:51:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.159 09:51:17 -- common/autotest_common.sh@10 -- # set +x 00:13:29.417 09:51:18 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.417 09:51:18 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.417 09:51:18 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:29.417 09:51:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.417 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 09:51:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.676 09:51:18 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:29.676 09:51:18 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:29.676 09:51:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.676 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 09:51:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.676 09:51:18 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:29.676 09:51:18 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:29.676 09:51:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.676 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 09:51:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.676 09:51:18 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:29.676 09:51:18 -- lvol/common.sh@26 -- # jq length 00:13:29.676 09:51:18 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:29.676 09:51:18 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:29.676 09:51:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.676 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.676 09:51:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.676 09:51:18 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:29.676 09:51:18 -- lvol/common.sh@28 -- # jq length 00:13:29.934 09:51:18 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:29.934 00:13:29.934 real 0m3.182s 00:13:29.934 user 0m0.804s 00:13:29.934 sys 0m0.141s 00:13:29.934 09:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.934 ************************************ 00:13:29.934 END TEST test_create_multi_ublk 00:13:29.934 ************************************ 00:13:29.934 09:51:18 -- common/autotest_common.sh@10 -- # set +x 00:13:29.934 09:51:18 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:29.934 09:51:18 -- ublk/ublk.sh@147 -- # cleanup 00:13:29.934 09:51:18 -- ublk/ublk.sh@130 -- # killprocess 69144 00:13:29.934 09:51:18 -- common/autotest_common.sh@936 -- # '[' -z 69144 ']' 00:13:29.934 09:51:18 -- common/autotest_common.sh@940 -- # kill -0 69144 00:13:29.934 09:51:18 -- common/autotest_common.sh@941 -- # uname 00:13:29.934 09:51:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:29.934 09:51:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69144 00:13:29.934 09:51:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:29.934 09:51:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:29.934 09:51:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69144' 00:13:29.934 killing process with pid 69144 00:13:29.934 09:51:18 -- common/autotest_common.sh@955 -- # kill 69144 00:13:29.934 09:51:18 -- common/autotest_common.sh@960 -- # wait 69144 00:13:30.501 [2024-12-15 09:51:19.274137] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:30.501 [2024-12-15 09:51:19.274185] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:31.068 00:13:31.068 real 0m24.968s 00:13:31.068 user 0m35.093s 00:13:31.068 sys 0m9.655s 00:13:31.068 09:51:19 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:13:31.068 09:51:19 -- common/autotest_common.sh@10 -- # set +x 00:13:31.068 ************************************ 00:13:31.068 END TEST ublk 00:13:31.068 ************************************ 00:13:31.068 09:51:19 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:31.068 09:51:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:31.068 09:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.068 09:51:19 -- common/autotest_common.sh@10 -- # set +x 00:13:31.068 ************************************ 00:13:31.068 START TEST ublk_recovery 00:13:31.068 ************************************ 00:13:31.068 09:51:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:31.068 * Looking for test storage... 00:13:31.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:31.068 09:51:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:31.068 09:51:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:31.068 09:51:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:31.330 09:51:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:31.330 09:51:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:31.330 09:51:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:31.330 09:51:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:31.330 09:51:20 -- scripts/common.sh@335 -- # IFS=.-: 00:13:31.330 09:51:20 -- scripts/common.sh@335 -- # read -ra ver1 00:13:31.330 09:51:20 -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.330 09:51:20 -- scripts/common.sh@336 -- # read -ra ver2 00:13:31.330 09:51:20 -- scripts/common.sh@337 -- # local 'op=<' 00:13:31.330 09:51:20 -- scripts/common.sh@339 -- # ver1_l=2 00:13:31.330 09:51:20 -- scripts/common.sh@340 -- # ver2_l=1 00:13:31.330 09:51:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:31.330 09:51:20 -- scripts/common.sh@343 -- # case "$op" in 00:13:31.330 09:51:20 -- scripts/common.sh@344 -- # : 1 00:13:31.330 09:51:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:31.330 09:51:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:31.330 09:51:20 -- scripts/common.sh@364 -- # decimal 1 00:13:31.330 09:51:20 -- scripts/common.sh@352 -- # local d=1 00:13:31.330 09:51:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.330 09:51:20 -- scripts/common.sh@354 -- # echo 1 00:13:31.330 09:51:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:31.330 09:51:20 -- scripts/common.sh@365 -- # decimal 2 00:13:31.330 09:51:20 -- scripts/common.sh@352 -- # local d=2 00:13:31.330 09:51:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.330 09:51:20 -- scripts/common.sh@354 -- # echo 2 00:13:31.330 09:51:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:31.330 09:51:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:31.330 09:51:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:31.330 09:51:20 -- scripts/common.sh@367 -- # return 0 00:13:31.330 09:51:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.330 09:51:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:31.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.330 --rc genhtml_branch_coverage=1 00:13:31.330 --rc genhtml_function_coverage=1 00:13:31.330 --rc genhtml_legend=1 00:13:31.330 --rc geninfo_all_blocks=1 00:13:31.330 --rc geninfo_unexecuted_blocks=1 00:13:31.330 00:13:31.330 ' 00:13:31.330 09:51:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:31.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.330 --rc genhtml_branch_coverage=1 00:13:31.330 --rc genhtml_function_coverage=1 00:13:31.330 --rc genhtml_legend=1 00:13:31.330 --rc geninfo_all_blocks=1 00:13:31.330 --rc geninfo_unexecuted_blocks=1 00:13:31.330 00:13:31.330 ' 00:13:31.330 09:51:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:31.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.330 --rc genhtml_branch_coverage=1 00:13:31.330 --rc genhtml_function_coverage=1 00:13:31.330 --rc genhtml_legend=1 00:13:31.330 --rc geninfo_all_blocks=1 00:13:31.330 --rc geninfo_unexecuted_blocks=1 00:13:31.330 00:13:31.330 ' 00:13:31.330 09:51:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:31.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.330 --rc genhtml_branch_coverage=1 00:13:31.330 --rc genhtml_function_coverage=1 00:13:31.330 --rc genhtml_legend=1 00:13:31.330 --rc geninfo_all_blocks=1 00:13:31.330 --rc geninfo_unexecuted_blocks=1 00:13:31.330 00:13:31.330 ' 00:13:31.330 09:51:20 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:31.330 09:51:20 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:31.330 09:51:20 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:31.330 09:51:20 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:31.330 09:51:20 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:31.330 09:51:20 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:31.330 09:51:20 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:31.330 09:51:20 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:31.330 09:51:20 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:31.330 09:51:20 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:31.330 09:51:20 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69535 00:13:31.330 09:51:20 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.330 09:51:20 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69535 00:13:31.330 09:51:20 -- 
common/autotest_common.sh@829 -- # '[' -z 69535 ']' 00:13:31.330 09:51:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.330 09:51:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.330 09:51:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.330 09:51:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.330 09:51:20 -- common/autotest_common.sh@10 -- # set +x 00:13:31.330 09:51:20 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:31.330 [2024-12-15 09:51:20.191266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:31.330 [2024-12-15 09:51:20.191374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69535 ] 00:13:31.330 [2024-12-15 09:51:20.335836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.592 [2024-12-15 09:51:20.515945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.592 [2024-12-15 09:51:20.516479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.592 [2024-12-15 09:51:20.516566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.977 09:51:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.977 09:51:21 -- common/autotest_common.sh@862 -- # return 0 00:13:32.977 09:51:21 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:32.977 09:51:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.977 09:51:21 -- common/autotest_common.sh@10 -- # set +x 00:13:32.977 [2024-12-15 09:51:21.647141] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:32.977 09:51:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.977 09:51:21 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:32.977 09:51:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.977 09:51:21 -- common/autotest_common.sh@10 -- # set +x 00:13:32.977 malloc0 00:13:32.977 09:51:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.977 09:51:21 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:32.977 09:51:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.977 09:51:21 -- common/autotest_common.sh@10 -- # set +x 00:13:32.977 [2024-12-15 09:51:21.747405] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:32.977 [2024-12-15 09:51:21.747498] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:32.977 [2024-12-15 09:51:21.747510] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:32.977 [2024-12-15 09:51:21.747519] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.977 [2024-12-15 09:51:21.763279] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.977 [2024-12-15 09:51:21.763302] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.977 [2024-12-15 09:51:21.774276] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:32.977 [2024-12-15 09:51:21.774413] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:32.977 [2024-12-15 09:51:21.786306] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:32.977 1 00:13:32.977 09:51:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.977 09:51:21 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:33.911 09:51:22 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69578 00:13:33.911 09:51:22 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:33.911 09:51:22 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:33.911 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.911 fio-3.35 00:13:33.911 Starting 1 process 00:13:39.176 09:51:27 -- ublk/ublk_recovery.sh@36 -- # kill -9 69535 00:13:39.176 09:51:27 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:44.523 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69535 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:44.523 09:51:32 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:44.523 09:51:32 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69694 00:13:44.523 09:51:32 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:44.523 09:51:32 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69694 00:13:44.523 09:51:32 -- common/autotest_common.sh@829 -- # '[' -z 69694 ']' 00:13:44.523 09:51:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.523 09:51:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.523 09:51:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.523 09:51:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.523 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:13:44.523 [2024-12-15 09:51:32.874179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
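The recovery half of the test kills the original target with kill -9 while fio keeps /dev/ublkb1 open, then respawns spdk_tgt and re-attaches the surviving kernel device. A minimal sketch of the re-attach sequence, mirroring the RPCs the script issues below (ublk_recover_disk drives the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY commands against the existing device):

    # On the respawned target: recreate the ublk target and the backing bdev,
    # then hand the still-open kernel device ublk 1 back to the new process.
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1
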
00:13:44.523 [2024-12-15 09:51:32.874303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69694 ] 00:13:44.523 [2024-12-15 09:51:33.023373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.523 [2024-12-15 09:51:33.238314] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.523 [2024-12-15 09:51:33.238927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.523 [2024-12-15 09:51:33.239027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.461 09:51:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.461 09:51:34 -- common/autotest_common.sh@862 -- # return 0 00:13:45.461 09:51:34 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:45.461 09:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.461 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:13:45.461 [2024-12-15 09:51:34.383140] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:45.461 09:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.461 09:51:34 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:45.461 09:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.461 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:13:45.720 malloc0 00:13:45.720 09:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.720 09:51:34 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:45.720 09:51:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.720 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:13:45.720 [2024-12-15 09:51:34.483412] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:45.720 [2024-12-15 09:51:34.483452] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:45.720 [2024-12-15 09:51:34.483461] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:45.721 [2024-12-15 09:51:34.491311] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:45.721 [2024-12-15 09:51:34.491332] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:45.721 [2024-12-15 09:51:34.491402] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:45.721 1 00:13:45.721 09:51:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.721 09:51:34 -- ublk/ublk_recovery.sh@52 -- # wait 69578 00:14:12.255 [2024-12-15 09:51:58.274281] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:12.255 [2024-12-15 09:51:58.278470] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:12.255 [2024-12-15 09:51:58.284478] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:12.255 [2024-12-15 09:51:58.284500] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:34.172 00:14:34.172 fio_test: (groupid=0, jobs=1): err= 0: pid=69581: Sun Dec 15 09:52:23 2024 00:14:34.172 read: IOPS=15.0k, BW=58.6MiB/s (61.4MB/s)(3514MiB/60001msec) 00:14:34.172 slat (nsec): min=907, max=2834.4k, 
avg=4871.69, stdev=3306.21 00:14:34.172 clat (usec): min=705, max=30494k, avg=4090.69, stdev=248995.06 00:14:34.172 lat (usec): min=710, max=30494k, avg=4095.56, stdev=248995.06 00:14:34.172 clat percentiles (usec): 00:14:34.172 | 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1893], 00:14:34.172 | 30.00th=[ 1909], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:14:34.172 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2024], 95.00th=[ 3032], 00:14:34.172 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7242], 99.95th=[ 8160], 00:14:34.172 | 99.99th=[13042] 00:14:34.172 bw ( KiB/s): min=28040, max=126824, per=100.00%, avg=120008.41, stdev=16696.25, samples=59 00:14:34.172 iops : min= 7010, max=31706, avg=30002.10, stdev=4174.06, samples=59 00:14:34.172 write: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(3509MiB/60001msec); 0 zone resets 00:14:34.172 slat (nsec): min=978, max=242370, avg=4908.45, stdev=1396.66 00:14:34.172 clat (usec): min=926, max=30494k, avg=4441.46, stdev=265269.56 00:14:34.172 lat (usec): min=932, max=30494k, avg=4446.37, stdev=265269.56 00:14:34.172 clat percentiles (usec): 00:14:34.172 | 1.00th=[ 1778], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975], 00:14:34.172 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040], 00:14:34.172 | 70.00th=[ 2040], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2933], 00:14:34.172 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7308], 99.95th=[ 8160], 00:14:34.172 | 99.99th=[13304] 00:14:34.172 bw ( KiB/s): min=27848, max=126368, per=100.00%, avg=119818.17, stdev=16713.99, samples=59 00:14:34.172 iops : min= 6962, max=31592, avg=29954.54, stdev=4178.50, samples=59 00:14:34.172 lat (usec) : 750=0.01%, 1000=0.01% 00:14:34.172 lat (msec) : 2=61.37%, 4=36.01%, 10=2.58%, 20=0.03%, >=2000=0.01% 00:14:34.172 cpu : usr=3.18%, sys=14.95%, ctx=59270, majf=0, minf=13 00:14:34.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:34.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:34.172 issued rwts: total=899688,898369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:34.172 00:14:34.172 Run status group 0 (all jobs): 00:14:34.172 READ: bw=58.6MiB/s (61.4MB/s), 58.6MiB/s-58.6MiB/s (61.4MB/s-61.4MB/s), io=3514MiB (3685MB), run=60001-60001msec 00:14:34.172 WRITE: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=3509MiB (3680MB), run=60001-60001msec 00:14:34.172 00:14:34.172 Disk stats (read/write): 00:14:34.172 ublkb1: ios=896304/894987, merge=0/0, ticks=3630393/3867109, in_queue=7497502, util=99.89% 00:14:34.172 09:52:23 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:34.172 09:52:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.172 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:14:34.172 [2024-12-15 09:52:23.051196] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:34.172 [2024-12-15 09:52:23.097372] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:34.172 [2024-12-15 09:52:23.097512] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:34.172 [2024-12-15 09:52:23.104284] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:34.172 [2024-12-15 09:52:23.104368] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: 
remove from tailq 00:14:34.172 [2024-12-15 09:52:23.104376] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:34.172 09:52:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.172 09:52:23 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:34.172 09:52:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.172 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:14:34.172 [2024-12-15 09:52:23.120331] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:34.172 [2024-12-15 09:52:23.128271] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:34.172 [2024-12-15 09:52:23.128299] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:34.172 09:52:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.172 09:52:23 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:34.172 09:52:23 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:34.172 09:52:23 -- ublk/ublk_recovery.sh@14 -- # killprocess 69694 00:14:34.172 09:52:23 -- common/autotest_common.sh@936 -- # '[' -z 69694 ']' 00:14:34.172 09:52:23 -- common/autotest_common.sh@940 -- # kill -0 69694 00:14:34.172 09:52:23 -- common/autotest_common.sh@941 -- # uname 00:14:34.172 09:52:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.172 09:52:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69694 00:14:34.172 killing process with pid 69694 00:14:34.172 09:52:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.172 09:52:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.172 09:52:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69694' 00:14:34.173 09:52:23 -- common/autotest_common.sh@955 -- # kill 69694 00:14:34.173 09:52:23 -- common/autotest_common.sh@960 -- # wait 69694 00:14:35.546 [2024-12-15 09:52:24.243363] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:35.546 [2024-12-15 09:52:24.243410] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:36.480 00:14:36.480 real 1m5.410s 00:14:36.480 user 1m51.280s 00:14:36.480 sys 0m19.753s 00:14:36.480 09:52:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.480 ************************************ 00:14:36.480 END TEST ublk_recovery 00:14:36.480 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:14:36.480 ************************************ 00:14:36.480 09:52:25 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:36.480 09:52:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.480 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:14:36.480 09:52:25 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:36.480 09:52:25 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:36.480 09:52:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:36.480 09:52:25 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.480 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:14:36.480 ************************************ 00:14:36.480 START TEST ftl 00:14:36.480 ************************************ 00:14:36.480 09:52:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:36.740 * Looking for test storage... 00:14:36.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:36.740 09:52:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:36.740 09:52:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:36.740 09:52:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:36.740 09:52:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:36.740 09:52:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:36.740 09:52:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:36.740 09:52:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:36.740 09:52:25 -- scripts/common.sh@335 -- # IFS=.-: 00:14:36.740 09:52:25 -- scripts/common.sh@335 -- # read -ra ver1 00:14:36.740 09:52:25 -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.740 09:52:25 -- scripts/common.sh@336 -- # read -ra ver2 00:14:36.740 09:52:25 -- scripts/common.sh@337 -- # local 'op=<' 00:14:36.740 09:52:25 -- scripts/common.sh@339 -- # ver1_l=2 00:14:36.740 09:52:25 -- scripts/common.sh@340 -- # ver2_l=1 00:14:36.740 09:52:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:36.740 09:52:25 -- scripts/common.sh@343 -- # case "$op" in 00:14:36.740 09:52:25 -- scripts/common.sh@344 -- # : 1 00:14:36.740 09:52:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:36.740 09:52:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.740 09:52:25 -- scripts/common.sh@364 -- # decimal 1 00:14:36.740 09:52:25 -- scripts/common.sh@352 -- # local d=1 00:14:36.740 09:52:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.740 09:52:25 -- scripts/common.sh@354 -- # echo 1 00:14:36.740 09:52:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:36.740 09:52:25 -- scripts/common.sh@365 -- # decimal 2 00:14:36.740 09:52:25 -- scripts/common.sh@352 -- # local d=2 00:14:36.740 09:52:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.740 09:52:25 -- scripts/common.sh@354 -- # echo 2 00:14:36.740 09:52:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:36.740 09:52:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:36.740 09:52:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:36.740 09:52:25 -- scripts/common.sh@367 -- # return 0 00:14:36.741 09:52:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.741 09:52:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:36.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.741 --rc genhtml_branch_coverage=1 00:14:36.741 --rc genhtml_function_coverage=1 00:14:36.741 --rc genhtml_legend=1 00:14:36.741 --rc geninfo_all_blocks=1 00:14:36.741 --rc geninfo_unexecuted_blocks=1 00:14:36.741 00:14:36.741 ' 00:14:36.741 09:52:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:36.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.741 --rc genhtml_branch_coverage=1 00:14:36.741 --rc genhtml_function_coverage=1 00:14:36.741 --rc genhtml_legend=1 00:14:36.741 --rc geninfo_all_blocks=1 00:14:36.741 --rc geninfo_unexecuted_blocks=1 00:14:36.741 00:14:36.741 ' 00:14:36.741 
09:52:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:36.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.741 --rc genhtml_branch_coverage=1 00:14:36.741 --rc genhtml_function_coverage=1 00:14:36.741 --rc genhtml_legend=1 00:14:36.741 --rc geninfo_all_blocks=1 00:14:36.741 --rc geninfo_unexecuted_blocks=1 00:14:36.741 00:14:36.741 ' 00:14:36.741 09:52:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:36.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.741 --rc genhtml_branch_coverage=1 00:14:36.741 --rc genhtml_function_coverage=1 00:14:36.741 --rc genhtml_legend=1 00:14:36.741 --rc geninfo_all_blocks=1 00:14:36.741 --rc geninfo_unexecuted_blocks=1 00:14:36.741 00:14:36.741 ' 00:14:36.741 09:52:25 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:36.741 09:52:25 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:36.741 09:52:25 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:36.741 09:52:25 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:36.741 09:52:25 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:36.741 09:52:25 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:36.741 09:52:25 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.741 09:52:25 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:36.741 09:52:25 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:36.741 09:52:25 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.741 09:52:25 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.741 09:52:25 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:36.741 09:52:25 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:36.741 09:52:25 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:36.741 09:52:25 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:36.741 09:52:25 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:36.741 09:52:25 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:36.741 09:52:25 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.741 09:52:25 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:36.741 09:52:25 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:36.741 09:52:25 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:36.741 09:52:25 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:36.741 09:52:25 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:36.741 09:52:25 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:36.741 09:52:25 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:36.741 09:52:25 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:36.741 09:52:25 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:36.741 09:52:25 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:36.741 09:52:25 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:36.741 09:52:25 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.741 09:52:25 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:36.741 09:52:25 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:36.741 09:52:25 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:36.741 09:52:25 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:36.741 09:52:25 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:37.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:37.309 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:37.309 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:37.309 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:37.309 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:37.309 09:52:26 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70509 00:14:37.309 09:52:26 -- ftl/ftl.sh@38 -- # waitforlisten 70509 00:14:37.309 09:52:26 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:37.309 09:52:26 -- common/autotest_common.sh@829 -- # '[' -z 70509 ']' 00:14:37.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.309 09:52:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.309 09:52:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.309 09:52:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.309 09:52:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.309 09:52:26 -- common/autotest_common.sh@10 -- # set +x 00:14:37.309 [2024-12-15 09:52:26.178194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:37.309 [2024-12-15 09:52:26.178301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70509 ] 00:14:37.309 [2024-12-15 09:52:26.316708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.567 [2024-12-15 09:52:26.456455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:37.567 [2024-12-15 09:52:26.456604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.131 09:52:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.132 09:52:26 -- common/autotest_common.sh@862 -- # return 0 00:14:38.132 09:52:26 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:38.132 09:52:27 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:39.064 09:52:27 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:39.064 09:52:27 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:39.322 09:52:28 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:39.322 09:52:28 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:39.322 09:52:28 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:39.580 09:52:28 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:39.580 09:52:28 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:39.580 09:52:28 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:39.580 09:52:28 -- 
ftl/ftl.sh@50 -- # break 00:14:39.580 09:52:28 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:39.580 09:52:28 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:39.580 09:52:28 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:39.580 09:52:28 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:39.580 09:52:28 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:39.580 09:52:28 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:39.580 09:52:28 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:39.580 09:52:28 -- ftl/ftl.sh@63 -- # break 00:14:39.580 09:52:28 -- ftl/ftl.sh@66 -- # killprocess 70509 00:14:39.580 09:52:28 -- common/autotest_common.sh@936 -- # '[' -z 70509 ']' 00:14:39.580 09:52:28 -- common/autotest_common.sh@940 -- # kill -0 70509 00:14:39.580 09:52:28 -- common/autotest_common.sh@941 -- # uname 00:14:39.580 09:52:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.580 09:52:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70509 00:14:39.580 09:52:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:39.580 killing process with pid 70509 00:14:39.580 09:52:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:39.580 09:52:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70509' 00:14:39.580 09:52:28 -- common/autotest_common.sh@955 -- # kill 70509 00:14:39.580 09:52:28 -- common/autotest_common.sh@960 -- # wait 70509 00:14:40.954 09:52:29 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:40.954 09:52:29 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:40.954 09:52:29 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:40.954 09:52:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:40.954 09:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.954 09:52:29 -- common/autotest_common.sh@10 -- # set +x 00:14:40.954 ************************************ 00:14:40.954 START TEST ftl_fio_basic 00:14:40.954 ************************************ 00:14:40.954 09:52:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:40.954 * Looking for test storage... 
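The two jq filters traced above (ftl.sh@47 and ftl.sh@60) are how the harness picks its devices: the NV-cache disk must be a non-zoned NVMe bdev exposing 64-byte metadata and at least 1310720 blocks, and the base disk is any other sufficiently large non-zoned bdev. A sketch of the selection, following the xtrace (the 0000:00:06.0 exclusion is the cache address chosen in this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Cache candidates: 64B metadata, not zoned, >= 1310720 blocks (ftl.sh@47)
    cache_disks=$("$rpc_py" bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
             .driver_specific.nvme[].pci_address')
    # Base candidates: any other large non-zoned NVMe bdev (ftl.sh@60)
    base_disks=$("$rpc_py" bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0"
               and .zoned == false and .num_blocks >= 1310720)
             .driver_specific.nvme[].pci_address')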
00:14:40.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.954 09:52:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:40.954 09:52:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:40.954 09:52:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:40.954 09:52:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:40.954 09:52:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:40.954 09:52:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:40.954 09:52:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:40.954 09:52:29 -- scripts/common.sh@335 -- # IFS=.-: 00:14:40.954 09:52:29 -- scripts/common.sh@335 -- # read -ra ver1 00:14:40.954 09:52:29 -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.954 09:52:29 -- scripts/common.sh@336 -- # read -ra ver2 00:14:40.954 09:52:29 -- scripts/common.sh@337 -- # local 'op=<' 00:14:40.954 09:52:29 -- scripts/common.sh@339 -- # ver1_l=2 00:14:40.954 09:52:29 -- scripts/common.sh@340 -- # ver2_l=1 00:14:40.954 09:52:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:40.954 09:52:29 -- scripts/common.sh@343 -- # case "$op" in 00:14:40.954 09:52:29 -- scripts/common.sh@344 -- # : 1 00:14:40.954 09:52:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:40.954 09:52:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:40.954 09:52:29 -- scripts/common.sh@364 -- # decimal 1 00:14:40.954 09:52:29 -- scripts/common.sh@352 -- # local d=1 00:14:40.954 09:52:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.954 09:52:29 -- scripts/common.sh@354 -- # echo 1 00:14:40.954 09:52:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:40.954 09:52:29 -- scripts/common.sh@365 -- # decimal 2 00:14:40.954 09:52:29 -- scripts/common.sh@352 -- # local d=2 00:14:40.954 09:52:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.954 09:52:29 -- scripts/common.sh@354 -- # echo 2 00:14:40.954 09:52:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:40.954 09:52:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:40.954 09:52:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:40.954 09:52:29 -- scripts/common.sh@367 -- # return 0 00:14:40.954 09:52:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.954 09:52:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.954 --rc genhtml_branch_coverage=1 00:14:40.954 --rc genhtml_function_coverage=1 00:14:40.954 --rc genhtml_legend=1 00:14:40.954 --rc geninfo_all_blocks=1 00:14:40.954 --rc geninfo_unexecuted_blocks=1 00:14:40.954 00:14:40.954 ' 00:14:40.954 09:52:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.954 --rc genhtml_branch_coverage=1 00:14:40.954 --rc genhtml_function_coverage=1 00:14:40.954 --rc genhtml_legend=1 00:14:40.954 --rc geninfo_all_blocks=1 00:14:40.954 --rc geninfo_unexecuted_blocks=1 00:14:40.954 00:14:40.954 ' 00:14:40.954 09:52:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.954 --rc genhtml_branch_coverage=1 00:14:40.954 --rc genhtml_function_coverage=1 00:14:40.954 --rc genhtml_legend=1 00:14:40.954 --rc geninfo_all_blocks=1 00:14:40.954 --rc geninfo_unexecuted_blocks=1 00:14:40.954 00:14:40.954 ' 00:14:40.954 09:52:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:40.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.954 --rc genhtml_branch_coverage=1 00:14:40.954 --rc genhtml_function_coverage=1 00:14:40.954 --rc genhtml_legend=1 00:14:40.954 --rc geninfo_all_blocks=1 00:14:40.954 --rc geninfo_unexecuted_blocks=1 00:14:40.954 00:14:40.954 ' 00:14:40.954 09:52:29 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:40.954 09:52:29 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:40.954 09:52:29 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.954 09:52:29 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:40.954 09:52:29 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:40.954 09:52:29 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:40.954 09:52:29 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.954 09:52:29 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:40.954 09:52:29 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:40.954 09:52:29 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.954 09:52:29 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.954 09:52:29 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:40.954 09:52:29 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:40.954 09:52:29 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:40.954 09:52:29 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:40.954 09:52:29 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:40.954 09:52:29 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:40.954 09:52:29 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.954 09:52:29 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.954 09:52:29 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:40.954 09:52:29 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:40.954 09:52:29 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:40.954 09:52:29 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:40.954 09:52:29 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:40.954 09:52:29 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:40.954 09:52:29 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:40.954 09:52:29 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:40.954 09:52:29 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.954 09:52:29 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.954 09:52:29 -- ftl/fio.sh@11 -- # declare -A suite 00:14:40.954 09:52:29 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:40.954 09:52:29 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:40.954 09:52:29 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:40.954 09:52:29 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.954 09:52:29 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:40.954 09:52:29 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:40.954 09:52:29 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:40.954 09:52:29 -- ftl/fio.sh@26 -- # uuid= 00:14:40.954 09:52:29 -- ftl/fio.sh@27 -- # timeout=240 00:14:40.954 09:52:29 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:40.954 09:52:29 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:40.954 09:52:29 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:40.954 09:52:29 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:40.954 09:52:29 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:40.954 09:52:29 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:40.954 09:52:29 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:40.954 09:52:29 -- ftl/fio.sh@45 -- # svcpid=70635 00:14:40.954 09:52:29 -- ftl/fio.sh@46 -- # waitforlisten 70635 00:14:40.954 09:52:29 -- common/autotest_common.sh@829 -- # '[' -z 70635 ']' 00:14:40.954 09:52:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.954 09:52:29 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:40.954 09:52:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.954 09:52:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.954 09:52:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.954 09:52:29 -- common/autotest_common.sh@10 -- # set +x 00:14:41.212 [2024-12-15 09:52:29.972467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
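The launch pattern just traced is the standard one for these fio suites: export the FTL bdev name and JSON config consumed by the fio plugin, start spdk_tgt in the background, and block until its RPC socket answers. A sketch per the fio.sh xtrace above:

    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &   # reactors on cores 0-2 (mask 7)
    svcpid=$!
    waitforlisten "$svcpid"   # returns once /var/tmp/spdk.sock accepts RPCs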
00:14:41.212 [2024-12-15 09:52:29.972577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70635 ] 00:14:41.212 [2024-12-15 09:52:30.118285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.471 [2024-12-15 09:52:30.259922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.471 [2024-12-15 09:52:30.260350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.471 [2024-12-15 09:52:30.263290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.471 [2024-12-15 09:52:30.263494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.729 09:52:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.729 09:52:30 -- common/autotest_common.sh@862 -- # return 0 00:14:41.729 09:52:30 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:41.729 09:52:30 -- ftl/common.sh@54 -- # local name=nvme0 00:14:41.729 09:52:30 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:41.729 09:52:30 -- ftl/common.sh@56 -- # local size=103424 00:14:41.729 09:52:30 -- ftl/common.sh@59 -- # local base_bdev 00:14:41.729 09:52:30 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:41.987 09:52:30 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:41.987 09:52:30 -- ftl/common.sh@62 -- # local base_size 00:14:41.987 09:52:30 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:41.987 09:52:30 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:41.987 09:52:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:41.987 09:52:30 -- common/autotest_common.sh@1369 -- # local bs 00:14:41.987 09:52:30 -- common/autotest_common.sh@1370 -- # local nb 00:14:41.987 09:52:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:42.251 09:52:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:42.251 { 00:14:42.251 "name": "nvme0n1", 00:14:42.251 "aliases": [ 00:14:42.251 "7b4ea464-32a0-4106-8ec3-7dd11eaa5f61" 00:14:42.251 ], 00:14:42.251 "product_name": "NVMe disk", 00:14:42.251 "block_size": 4096, 00:14:42.251 "num_blocks": 1310720, 00:14:42.251 "uuid": "7b4ea464-32a0-4106-8ec3-7dd11eaa5f61", 00:14:42.251 "assigned_rate_limits": { 00:14:42.251 "rw_ios_per_sec": 0, 00:14:42.251 "rw_mbytes_per_sec": 0, 00:14:42.251 "r_mbytes_per_sec": 0, 00:14:42.251 "w_mbytes_per_sec": 0 00:14:42.251 }, 00:14:42.251 "claimed": false, 00:14:42.251 "zoned": false, 00:14:42.251 "supported_io_types": { 00:14:42.251 "read": true, 00:14:42.251 "write": true, 00:14:42.251 "unmap": true, 00:14:42.251 "write_zeroes": true, 00:14:42.251 "flush": true, 00:14:42.251 "reset": true, 00:14:42.251 "compare": true, 00:14:42.251 "compare_and_write": false, 00:14:42.251 "abort": true, 00:14:42.251 "nvme_admin": true, 00:14:42.251 "nvme_io": true 00:14:42.251 }, 00:14:42.251 "driver_specific": { 00:14:42.251 "nvme": [ 00:14:42.251 { 00:14:42.251 "pci_address": "0000:00:07.0", 00:14:42.251 "trid": { 00:14:42.251 "trtype": "PCIe", 00:14:42.251 "traddr": "0000:00:07.0" 00:14:42.251 }, 00:14:42.251 "ctrlr_data": { 00:14:42.251 "cntlid": 0, 00:14:42.251 "vendor_id": "0x1b36", 00:14:42.251 "model_number": "QEMU NVMe Ctrl", 00:14:42.251 "serial_number": 
"12341", 00:14:42.251 "firmware_revision": "8.0.0", 00:14:42.251 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:42.251 "oacs": { 00:14:42.251 "security": 0, 00:14:42.251 "format": 1, 00:14:42.251 "firmware": 0, 00:14:42.251 "ns_manage": 1 00:14:42.251 }, 00:14:42.251 "multi_ctrlr": false, 00:14:42.251 "ana_reporting": false 00:14:42.251 }, 00:14:42.251 "vs": { 00:14:42.251 "nvme_version": "1.4" 00:14:42.251 }, 00:14:42.251 "ns_data": { 00:14:42.251 "id": 1, 00:14:42.251 "can_share": false 00:14:42.251 } 00:14:42.251 } 00:14:42.251 ], 00:14:42.251 "mp_policy": "active_passive" 00:14:42.251 } 00:14:42.251 } 00:14:42.251 ]' 00:14:42.251 09:52:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:42.251 09:52:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:42.251 09:52:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:42.251 09:52:31 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:42.251 09:52:31 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:42.251 09:52:31 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:42.251 09:52:31 -- ftl/common.sh@63 -- # base_size=5120 00:14:42.251 09:52:31 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:42.251 09:52:31 -- ftl/common.sh@67 -- # clear_lvols 00:14:42.251 09:52:31 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:42.251 09:52:31 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:42.510 09:52:31 -- ftl/common.sh@28 -- # stores= 00:14:42.510 09:52:31 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:42.769 09:52:31 -- ftl/common.sh@68 -- # lvs=5c7447b5-e66a-4ddb-9d84-f467e82ea994 00:14:42.769 09:52:31 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5c7447b5-e66a-4ddb-9d84-f467e82ea994 00:14:43.027 09:52:31 -- ftl/fio.sh@48 -- # split_bdev=aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.027 09:52:31 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.027 09:52:31 -- ftl/common.sh@35 -- # local name=nvc0 00:14:43.027 09:52:31 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:43.027 09:52:31 -- ftl/common.sh@37 -- # local base_bdev=aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.027 09:52:31 -- ftl/common.sh@38 -- # local cache_size= 00:14:43.027 09:52:31 -- ftl/common.sh@41 -- # get_bdev_size aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.027 09:52:31 -- common/autotest_common.sh@1367 -- # local bdev_name=aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.027 09:52:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:43.027 09:52:31 -- common/autotest_common.sh@1369 -- # local bs 00:14:43.027 09:52:31 -- common/autotest_common.sh@1370 -- # local nb 00:14:43.027 09:52:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.027 09:52:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:43.027 { 00:14:43.027 "name": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:43.027 "aliases": [ 00:14:43.027 "lvs/nvme0n1p0" 00:14:43.027 ], 00:14:43.027 "product_name": "Logical Volume", 00:14:43.027 "block_size": 4096, 00:14:43.027 "num_blocks": 26476544, 00:14:43.027 "uuid": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:43.027 "assigned_rate_limits": { 00:14:43.027 "rw_ios_per_sec": 0, 00:14:43.027 "rw_mbytes_per_sec": 0, 00:14:43.027 "r_mbytes_per_sec": 0, 00:14:43.027 
"w_mbytes_per_sec": 0 00:14:43.027 }, 00:14:43.027 "claimed": false, 00:14:43.027 "zoned": false, 00:14:43.027 "supported_io_types": { 00:14:43.027 "read": true, 00:14:43.027 "write": true, 00:14:43.027 "unmap": true, 00:14:43.027 "write_zeroes": true, 00:14:43.027 "flush": false, 00:14:43.027 "reset": true, 00:14:43.027 "compare": false, 00:14:43.027 "compare_and_write": false, 00:14:43.027 "abort": false, 00:14:43.027 "nvme_admin": false, 00:14:43.027 "nvme_io": false 00:14:43.027 }, 00:14:43.027 "driver_specific": { 00:14:43.027 "lvol": { 00:14:43.027 "lvol_store_uuid": "5c7447b5-e66a-4ddb-9d84-f467e82ea994", 00:14:43.027 "base_bdev": "nvme0n1", 00:14:43.027 "thin_provision": true, 00:14:43.027 "snapshot": false, 00:14:43.027 "clone": false, 00:14:43.027 "esnap_clone": false 00:14:43.027 } 00:14:43.027 } 00:14:43.027 } 00:14:43.027 ]' 00:14:43.027 09:52:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:43.027 09:52:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:43.027 09:52:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:43.027 09:52:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:43.027 09:52:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:43.027 09:52:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:43.027 09:52:32 -- ftl/common.sh@41 -- # local base_size=5171 00:14:43.027 09:52:32 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:43.027 09:52:32 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:43.292 09:52:32 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:43.292 09:52:32 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:43.292 09:52:32 -- ftl/common.sh@48 -- # get_bdev_size aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.292 09:52:32 -- common/autotest_common.sh@1367 -- # local bdev_name=aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.292 09:52:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:43.292 09:52:32 -- common/autotest_common.sh@1369 -- # local bs 00:14:43.292 09:52:32 -- common/autotest_common.sh@1370 -- # local nb 00:14:43.292 09:52:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.556 09:52:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:43.556 { 00:14:43.556 "name": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:43.556 "aliases": [ 00:14:43.556 "lvs/nvme0n1p0" 00:14:43.556 ], 00:14:43.556 "product_name": "Logical Volume", 00:14:43.556 "block_size": 4096, 00:14:43.556 "num_blocks": 26476544, 00:14:43.556 "uuid": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:43.556 "assigned_rate_limits": { 00:14:43.556 "rw_ios_per_sec": 0, 00:14:43.556 "rw_mbytes_per_sec": 0, 00:14:43.556 "r_mbytes_per_sec": 0, 00:14:43.556 "w_mbytes_per_sec": 0 00:14:43.556 }, 00:14:43.556 "claimed": false, 00:14:43.556 "zoned": false, 00:14:43.556 "supported_io_types": { 00:14:43.556 "read": true, 00:14:43.556 "write": true, 00:14:43.556 "unmap": true, 00:14:43.556 "write_zeroes": true, 00:14:43.556 "flush": false, 00:14:43.556 "reset": true, 00:14:43.556 "compare": false, 00:14:43.556 "compare_and_write": false, 00:14:43.556 "abort": false, 00:14:43.556 "nvme_admin": false, 00:14:43.556 "nvme_io": false 00:14:43.556 }, 00:14:43.556 "driver_specific": { 00:14:43.556 "lvol": { 00:14:43.556 "lvol_store_uuid": "5c7447b5-e66a-4ddb-9d84-f467e82ea994", 00:14:43.556 "base_bdev": "nvme0n1", 00:14:43.556 "thin_provision": true, 
00:14:43.556 "snapshot": false, 00:14:43.556 "clone": false, 00:14:43.557 "esnap_clone": false 00:14:43.557 } 00:14:43.557 } 00:14:43.557 } 00:14:43.557 ]' 00:14:43.557 09:52:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:43.557 09:52:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:43.557 09:52:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:43.557 09:52:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:43.557 09:52:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:43.557 09:52:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:43.557 09:52:32 -- ftl/common.sh@48 -- # cache_size=5171 00:14:43.557 09:52:32 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:43.815 09:52:32 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:43.815 09:52:32 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:43.815 09:52:32 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:43.815 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:43.815 09:52:32 -- ftl/fio.sh@56 -- # get_bdev_size aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.815 09:52:32 -- common/autotest_common.sh@1367 -- # local bdev_name=aa5b0a44-1731-437d-9ed6-9137839df622 00:14:43.815 09:52:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:43.815 09:52:32 -- common/autotest_common.sh@1369 -- # local bs 00:14:43.815 09:52:32 -- common/autotest_common.sh@1370 -- # local nb 00:14:43.815 09:52:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa5b0a44-1731-437d-9ed6-9137839df622 00:14:44.074 09:52:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:44.074 { 00:14:44.074 "name": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:44.074 "aliases": [ 00:14:44.074 "lvs/nvme0n1p0" 00:14:44.074 ], 00:14:44.074 "product_name": "Logical Volume", 00:14:44.074 "block_size": 4096, 00:14:44.074 "num_blocks": 26476544, 00:14:44.074 "uuid": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:44.074 "assigned_rate_limits": { 00:14:44.074 "rw_ios_per_sec": 0, 00:14:44.074 "rw_mbytes_per_sec": 0, 00:14:44.074 "r_mbytes_per_sec": 0, 00:14:44.074 "w_mbytes_per_sec": 0 00:14:44.074 }, 00:14:44.074 "claimed": false, 00:14:44.074 "zoned": false, 00:14:44.074 "supported_io_types": { 00:14:44.074 "read": true, 00:14:44.074 "write": true, 00:14:44.074 "unmap": true, 00:14:44.074 "write_zeroes": true, 00:14:44.074 "flush": false, 00:14:44.074 "reset": true, 00:14:44.074 "compare": false, 00:14:44.074 "compare_and_write": false, 00:14:44.074 "abort": false, 00:14:44.074 "nvme_admin": false, 00:14:44.074 "nvme_io": false 00:14:44.074 }, 00:14:44.074 "driver_specific": { 00:14:44.074 "lvol": { 00:14:44.074 "lvol_store_uuid": "5c7447b5-e66a-4ddb-9d84-f467e82ea994", 00:14:44.074 "base_bdev": "nvme0n1", 00:14:44.074 "thin_provision": true, 00:14:44.074 "snapshot": false, 00:14:44.074 "clone": false, 00:14:44.074 "esnap_clone": false 00:14:44.074 } 00:14:44.074 } 00:14:44.074 } 00:14:44.074 ]' 00:14:44.074 09:52:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:44.074 09:52:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:44.074 09:52:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:44.074 09:52:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:44.074 09:52:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:44.074 09:52:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:44.074 
09:52:32 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:44.074 09:52:32 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:44.074 09:52:32 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aa5b0a44-1731-437d-9ed6-9137839df622 -c nvc0n1p0 --l2p_dram_limit 60 00:14:44.334 [2024-12-15 09:52:33.115869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.115907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:44.334 [2024-12-15 09:52:33.115921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:44.334 [2024-12-15 09:52:33.115928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.115990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.115998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:44.334 [2024-12-15 09:52:33.116006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:14:44.334 [2024-12-15 09:52:33.116013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.116039] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:44.334 [2024-12-15 09:52:33.116709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:44.334 [2024-12-15 09:52:33.116730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.116736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:44.334 [2024-12-15 09:52:33.116744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:14:44.334 [2024-12-15 09:52:33.116750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.116782] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID afc5a546-0892-41b8-abc5-a86295e2d15b 00:14:44.334 [2024-12-15 09:52:33.117824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.117843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:44.334 [2024-12-15 09:52:33.117850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:14:44.334 [2024-12-15 09:52:33.117857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.123078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.123107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:44.334 [2024-12-15 09:52:33.123115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:14:44.334 [2024-12-15 09:52:33.123122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.123194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.123201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:44.334 [2024-12-15 09:52:33.123208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:44.334 [2024-12-15 09:52:33.123216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.123266] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.123274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:44.334 [2024-12-15 09:52:33.123281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:14:44.334 [2024-12-15 09:52:33.123290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.123323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:44.334 [2024-12-15 09:52:33.126281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.126302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:44.334 [2024-12-15 09:52:33.126311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:14:44.334 [2024-12-15 09:52:33.126316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.126353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.126359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:44.334 [2024-12-15 09:52:33.126367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:44.334 [2024-12-15 09:52:33.126372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.334 [2024-12-15 09:52:33.126402] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:44.334 [2024-12-15 09:52:33.126491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:44.334 [2024-12-15 09:52:33.126503] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:44.334 [2024-12-15 09:52:33.126511] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:44.334 [2024-12-15 09:52:33.126520] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:44.334 [2024-12-15 09:52:33.126527] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:44.334 [2024-12-15 09:52:33.126534] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:44.334 [2024-12-15 09:52:33.126539] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:44.334 [2024-12-15 09:52:33.126548] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:44.334 [2024-12-15 09:52:33.126554] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:44.334 [2024-12-15 09:52:33.126561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.334 [2024-12-15 09:52:33.126567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:44.334 [2024-12-15 09:52:33.126573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:14:44.335 [2024-12-15 09:52:33.126579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.126639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.126646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:44.335 [2024-12-15 09:52:33.126653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:14:44.335 [2024-12-15 09:52:33.126658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.126735] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:44.335 [2024-12-15 09:52:33.126742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:44.335 [2024-12-15 09:52:33.126750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126762] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:44.335 [2024-12-15 09:52:33.126767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:44.335 [2024-12-15 09:52:33.126786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:44.335 [2024-12-15 09:52:33.126798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:44.335 [2024-12-15 09:52:33.126803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:44.335 [2024-12-15 09:52:33.126810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:44.335 [2024-12-15 09:52:33.126816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:44.335 [2024-12-15 09:52:33.126822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:44.335 [2024-12-15 09:52:33.126831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:44.335 [2024-12-15 09:52:33.126843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:44.335 [2024-12-15 09:52:33.126849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:44.335 [2024-12-15 09:52:33.126861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:44.335 [2024-12-15 09:52:33.126866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:44.335 [2024-12-15 09:52:33.126877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:44.335 [2024-12-15 09:52:33.126894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:44.335 [2024-12-15 09:52:33.126910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126916] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:44.335 [2024-12-15 09:52:33.126928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:44.335 [2024-12-15 09:52:33.126951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:44.335 [2024-12-15 09:52:33.126956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:44.335 [2024-12-15 09:52:33.126963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:44.335 [2024-12-15 09:52:33.126969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:44.335 [2024-12-15 09:52:33.126975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:44.335 [2024-12-15 09:52:33.126980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:44.335 [2024-12-15 09:52:33.126986] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:44.335 [2024-12-15 09:52:33.126991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:44.335 [2024-12-15 09:52:33.126998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:44.335 [2024-12-15 09:52:33.127003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:44.335 [2024-12-15 09:52:33.127010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:44.335 [2024-12-15 09:52:33.127015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:44.335 [2024-12-15 09:52:33.127021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:44.335 [2024-12-15 09:52:33.127029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:44.335 [2024-12-15 09:52:33.127037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:44.335 [2024-12-15 09:52:33.127042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:44.335 [2024-12-15 09:52:33.127049] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:44.335 [2024-12-15 09:52:33.127056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:44.335 [2024-12-15 09:52:33.127066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:44.335 [2024-12-15 09:52:33.127071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:44.335 [2024-12-15 09:52:33.127078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:44.335 [2024-12-15 09:52:33.127083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:44.335 [2024-12-15 09:52:33.127090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:44.335 [2024-12-15 09:52:33.127095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:44.335 
[2024-12-15 09:52:33.127101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:44.335 [2024-12-15 09:52:33.127106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:44.335 [2024-12-15 09:52:33.127113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:44.335 [2024-12-15 09:52:33.127118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:44.335 [2024-12-15 09:52:33.127125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:44.335 [2024-12-15 09:52:33.127131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:44.335 [2024-12-15 09:52:33.127139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:44.335 [2024-12-15 09:52:33.127144] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:44.335 [2024-12-15 09:52:33.127151] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:44.335 [2024-12-15 09:52:33.127159] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:44.335 [2024-12-15 09:52:33.127165] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:44.335 [2024-12-15 09:52:33.127171] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:44.335 [2024-12-15 09:52:33.127177] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:44.335 [2024-12-15 09:52:33.127183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.127189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:44.335 [2024-12-15 09:52:33.127195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:14:44.335 [2024-12-15 09:52:33.127202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.139495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.139524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:44.335 [2024-12-15 09:52:33.139532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.204 ms 00:14:44.335 [2024-12-15 09:52:33.139539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.139615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.139624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:44.335 [2024-12-15 09:52:33.139629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:14:44.335 [2024-12-15 09:52:33.139636] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.165170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.165195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:44.335 [2024-12-15 09:52:33.165204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.494 ms 00:14:44.335 [2024-12-15 09:52:33.165212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.165238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.165246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:44.335 [2024-12-15 09:52:33.165262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:44.335 [2024-12-15 09:52:33.165270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.335 [2024-12-15 09:52:33.165598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.335 [2024-12-15 09:52:33.165622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:44.336 [2024-12-15 09:52:33.165629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:14:44.336 [2024-12-15 09:52:33.165636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.336 [2024-12-15 09:52:33.165735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.336 [2024-12-15 09:52:33.165745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:44.336 [2024-12-15 09:52:33.165751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:14:44.336 [2024-12-15 09:52:33.165758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.336 [2024-12-15 09:52:33.192074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.336 [2024-12-15 09:52:33.192130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:44.336 [2024-12-15 09:52:33.192151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.292 ms 00:14:44.336 [2024-12-15 09:52:33.192168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.336 [2024-12-15 09:52:33.201645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:44.336 [2024-12-15 09:52:33.214325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.336 [2024-12-15 09:52:33.214349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:44.336 [2024-12-15 09:52:33.214358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.968 ms 00:14:44.336 [2024-12-15 09:52:33.214365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.336 [2024-12-15 09:52:33.262670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:44.336 [2024-12-15 09:52:33.262705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:44.336 [2024-12-15 09:52:33.262716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.273 ms 00:14:44.336 [2024-12-15 09:52:33.262723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:44.336 [2024-12-15 09:52:33.262763] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
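The superblock dump above prints each region twice: the NV cache and base device layouts in MiB, and the SB metadata layout in raw blocks (blk_offs/blk_sz). With the 4096-byte block size this device reports (see the bdev_get_bdevs output further down), the two views agree; a quick sanity check, offered only as a sketch:

    # hypothetical check: convert the blk_sz values printed by
    # ftl_superblock_v5_md_layout_dump into MiB at 4 KiB per block
    for blk_sz in 0x20 0x80 0x400 0x5000 0x100000; do
        bytes=$((blk_sz * 4096))   # shell arithmetic accepts the 0x prefix
        printf '%s blocks = %s MiB\n' "$blk_sz" "$(echo "scale=2; $bytes / 1048576" | bc)"
    done

Here 0x5000 blocks come out to 80.00 MiB, matching the "Region l2p ... blocks: 80.00 MiB" lines, and 0x100000 blocks to 4096.00 MiB, matching data_nvc.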
00:14:44.336 [2024-12-15 09:52:33.262771] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:46.867 [2024-12-15 09:52:35.675078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.867 [2024-12-15 09:52:35.675124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:46.867 [2024-12-15 09:52:35.675140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2412.308 ms 00:14:46.867 [2024-12-15 09:52:35.675148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.867 [2024-12-15 09:52:35.675352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.867 [2024-12-15 09:52:35.675364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:46.867 [2024-12-15 09:52:35.675375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:14:46.867 [2024-12-15 09:52:35.675383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.867 [2024-12-15 09:52:35.698443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.867 [2024-12-15 09:52:35.698472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:46.867 [2024-12-15 09:52:35.698486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.000 ms 00:14:46.867 [2024-12-15 09:52:35.698494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.867 [2024-12-15 09:52:35.720592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.867 [2024-12-15 09:52:35.720618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:46.867 [2024-12-15 09:52:35.720634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.054 ms 00:14:46.867 [2024-12-15 09:52:35.720641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.867 [2024-12-15 09:52:35.720968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.867 [2024-12-15 09:52:35.720985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:46.867 [2024-12-15 09:52:35.720994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:14:46.868 [2024-12-15 09:52:35.721002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.779952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.868 [2024-12-15 09:52:35.779981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:46.868 [2024-12-15 09:52:35.779995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.909 ms 00:14:46.868 [2024-12-15 09:52:35.780004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.804347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.868 [2024-12-15 09:52:35.804374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:46.868 [2024-12-15 09:52:35.804388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.302 ms 00:14:46.868 [2024-12-15 09:52:35.804395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.808210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.868 [2024-12-15 09:52:35.808236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
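Every management step logs the same four notices (Action, name, duration, status), which makes the console easy to mine when startup looks slow; the NV cache scrub above accounts for most of the total reported just below. A minimal sketch, assuming the console output was saved one entry per line to build.log (both the filename and the one-entry-per-line layout are assumptions):

    # rank FTL management steps by duration, slowest first
    sed -n 's/.*trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] //p' build.log |
        grep -E '^(name|duration):' |   # keep each step's name and duration
        paste - - |                     # join every name/duration pair
        sort -t: -k3,3 -gr |            # numeric sort on the millisecond value
        head -5                         # e.g. "name: Scrub NV cache  duration: 2412.308 ms"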
00:14:46.868 [2024-12-15 09:52:35.808249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:14:46.868 [2024-12-15 09:52:35.808266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.831539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.868 [2024-12-15 09:52:35.831565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:46.868 [2024-12-15 09:52:35.831576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.228 ms 00:14:46.868 [2024-12-15 09:52:35.831583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.831642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.868 [2024-12-15 09:52:35.831651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:46.868 [2024-12-15 09:52:35.831661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:14:46.868 [2024-12-15 09:52:35.831668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.831760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.868 [2024-12-15 09:52:35.831769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:46.868 [2024-12-15 09:52:35.831780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:14:46.868 [2024-12-15 09:52:35.831792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.868 [2024-12-15 09:52:35.832726] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2716.448 ms, result 0 00:14:46.868 { 00:14:46.868 "name": "ftl0", 00:14:46.868 "uuid": "afc5a546-0892-41b8-abc5-a86295e2d15b" 00:14:46.868 } 00:14:46.868 09:52:35 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:46.868 09:52:35 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:46.868 09:52:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.868 09:52:35 -- common/autotest_common.sh@899 -- # local i 00:14:46.868 09:52:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.868 09:52:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.868 09:52:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:47.126 09:52:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:47.384 [ 00:14:47.384 { 00:14:47.384 "name": "ftl0", 00:14:47.384 "aliases": [ 00:14:47.384 "afc5a546-0892-41b8-abc5-a86295e2d15b" 00:14:47.384 ], 00:14:47.384 "product_name": "FTL disk", 00:14:47.384 "block_size": 4096, 00:14:47.384 "num_blocks": 20971520, 00:14:47.384 "uuid": "afc5a546-0892-41b8-abc5-a86295e2d15b", 00:14:47.384 "assigned_rate_limits": { 00:14:47.384 "rw_ios_per_sec": 0, 00:14:47.384 "rw_mbytes_per_sec": 0, 00:14:47.384 "r_mbytes_per_sec": 0, 00:14:47.384 "w_mbytes_per_sec": 0 00:14:47.384 }, 00:14:47.384 "claimed": false, 00:14:47.384 "zoned": false, 00:14:47.384 "supported_io_types": { 00:14:47.384 "read": true, 00:14:47.384 "write": true, 00:14:47.384 "unmap": true, 00:14:47.384 "write_zeroes": true, 00:14:47.384 "flush": true, 00:14:47.384 "reset": false, 00:14:47.384 "compare": false, 00:14:47.384 "compare_and_write": false, 00:14:47.384 "abort": false, 00:14:47.384 "nvme_admin": false, 00:14:47.384 "nvme_io": false 00:14:47.384 }, 
00:14:47.384 "driver_specific": { 00:14:47.384 "ftl": { 00:14:47.384 "base_bdev": "aa5b0a44-1731-437d-9ed6-9137839df622", 00:14:47.384 "cache": "nvc0n1p0" 00:14:47.384 } 00:14:47.384 } 00:14:47.384 } 00:14:47.384 ] 00:14:47.384 09:52:36 -- common/autotest_common.sh@905 -- # return 0 00:14:47.384 09:52:36 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:47.384 09:52:36 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:47.384 09:52:36 -- ftl/fio.sh@70 -- # echo ']}' 00:14:47.384 09:52:36 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:47.642 [2024-12-15 09:52:36.577691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.642 [2024-12-15 09:52:36.577735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:47.642 [2024-12-15 09:52:36.577747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:47.642 [2024-12-15 09:52:36.577757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.642 [2024-12-15 09:52:36.577790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:47.642 [2024-12-15 09:52:36.580239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.642 [2024-12-15 09:52:36.580286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:47.642 [2024-12-15 09:52:36.580301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:14:47.642 [2024-12-15 09:52:36.580309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.642 [2024-12-15 09:52:36.580790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.642 [2024-12-15 09:52:36.580801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:47.642 [2024-12-15 09:52:36.580811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:14:47.642 [2024-12-15 09:52:36.580818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.642 [2024-12-15 09:52:36.584067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.642 [2024-12-15 09:52:36.584085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:47.642 [2024-12-15 09:52:36.584096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:14:47.642 [2024-12-15 09:52:36.584104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.642 [2024-12-15 09:52:36.590372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.642 [2024-12-15 09:52:36.590393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:47.642 [2024-12-15 09:52:36.590403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:14:47.642 [2024-12-15 09:52:36.590410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.643 [2024-12-15 09:52:36.614349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.643 [2024-12-15 09:52:36.614377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:47.643 [2024-12-15 09:52:36.614389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.838 ms 00:14:47.643 [2024-12-15 09:52:36.614396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.643 [2024-12-15 09:52:36.628839] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.643 [2024-12-15 09:52:36.628866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:47.643 [2024-12-15 09:52:36.628891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.396 ms 00:14:47.643 [2024-12-15 09:52:36.628898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.643 [2024-12-15 09:52:36.629095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.643 [2024-12-15 09:52:36.629105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:47.643 [2024-12-15 09:52:36.629118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:14:47.643 [2024-12-15 09:52:36.629125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.643 [2024-12-15 09:52:36.651791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.643 [2024-12-15 09:52:36.651817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:47.643 [2024-12-15 09:52:36.651828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.635 ms 00:14:47.643 [2024-12-15 09:52:36.651835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.902 [2024-12-15 09:52:36.674639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.902 [2024-12-15 09:52:36.674664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:47.902 [2024-12-15 09:52:36.674675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.757 ms 00:14:47.902 [2024-12-15 09:52:36.674682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.902 [2024-12-15 09:52:36.697226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.902 [2024-12-15 09:52:36.697251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:47.902 [2024-12-15 09:52:36.697270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.498 ms 00:14:47.902 [2024-12-15 09:52:36.697276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.902 [2024-12-15 09:52:36.719917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.902 [2024-12-15 09:52:36.719949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:47.902 [2024-12-15 09:52:36.719960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.546 ms 00:14:47.902 [2024-12-15 09:52:36.719967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.902 [2024-12-15 09:52:36.720017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:47.902 [2024-12-15 09:52:36.720031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720075] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 
09:52:36.720286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:47.902 [2024-12-15 09:52:36.720303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:47.903 [2024-12-15 09:52:36.720494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:47.903 [2024-12-15 09:52:36.720881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:47.903 [2024-12-15 09:52:36.720890] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: afc5a546-0892-41b8-abc5-a86295e2d15b 00:14:47.903 [2024-12-15 09:52:36.720900] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:47.903 [2024-12-15 09:52:36.720908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:47.903 [2024-12-15 09:52:36.720915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:47.903 [2024-12-15 09:52:36.720923] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:47.903 [2024-12-15 09:52:36.720930] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:47.903 [2024-12-15 09:52:36.720938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:47.903 [2024-12-15 09:52:36.720945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:47.903 [2024-12-15 09:52:36.720953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:47.903 [2024-12-15 09:52:36.720959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:47.903 [2024-12-15 09:52:36.720969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.903 [2024-12-15 09:52:36.720978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:47.903 [2024-12-15 09:52:36.720987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:14:47.903 [2024-12-15 09:52:36.720993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.903 [2024-12-15 09:52:36.733624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.903 [2024-12-15 09:52:36.733647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:47.903 [2024-12-15 09:52:36.733659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:14:47.903 [2024-12-15 09:52:36.733666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.903 [2024-12-15 09:52:36.733862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:47.903 [2024-12-15 09:52:36.733870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:47.903 [2024-12-15 09:52:36.733879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:14:47.903 [2024-12-15 09:52:36.733886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.903 [2024-12-15 09:52:36.778139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.903 [2024-12-15 09:52:36.778166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:47.903 [2024-12-15 09:52:36.778178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.903 [2024-12-15 09:52:36.778186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.903 [2024-12-15 09:52:36.778247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.903 [2024-12-15 09:52:36.778264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:47.903 [2024-12-15 09:52:36.778273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.903 [2024-12-15 09:52:36.778280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.778355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.778365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:47.904 [2024-12-15 09:52:36.778374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.778381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.778409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.778418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
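The "Rollback" entries around this point are the unload path unwinding each startup step; on a clean shutdown every one reports 0.000 ms. The teardown that triggers them was issued over RPC at ftl/fio.sh@73 above; roughly the same sequence by hand would look like the following sketch (killprocess is the autotest_common.sh helper, reduced here to a plain kill, with the pid taken from this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_unload -b ftl0   # emits the 'FTL shutdown' trace, result 0
    kill 70635                       # stop the SPDK app; killprocess layers
                                     # liveness checks and a wait on top of this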
00:14:47.904 [2024-12-15 09:52:36.778427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.778434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.863813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.863849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:47.904 [2024-12-15 09:52:36.863861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.863869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.892812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.892840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:47.904 [2024-12-15 09:52:36.892852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.892859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.892927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.892936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:47.904 [2024-12-15 09:52:36.892946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.892953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.893022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.893030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:47.904 [2024-12-15 09:52:36.893041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.893053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.893149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.893158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:47.904 [2024-12-15 09:52:36.893167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.893185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.893240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.893248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:47.904 [2024-12-15 09:52:36.893269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.893278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.893322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.893331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:47.904 [2024-12-15 09:52:36.893340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.893346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.893401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:47.904 [2024-12-15 09:52:36.893410] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:47.904 [2024-12-15 09:52:36.893421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:47.904 [2024-12-15 09:52:36.893428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:47.904 [2024-12-15 09:52:36.893590] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 315.866 ms, result 0 00:14:47.904 true 00:14:48.162 09:52:36 -- ftl/fio.sh@75 -- # killprocess 70635 00:14:48.162 09:52:36 -- common/autotest_common.sh@936 -- # '[' -z 70635 ']' 00:14:48.162 09:52:36 -- common/autotest_common.sh@940 -- # kill -0 70635 00:14:48.162 09:52:36 -- common/autotest_common.sh@941 -- # uname 00:14:48.162 09:52:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.162 09:52:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70635 00:14:48.162 09:52:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.162 09:52:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.162 killing process with pid 70635 00:14:48.162 09:52:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70635' 00:14:48.162 09:52:36 -- common/autotest_common.sh@955 -- # kill 70635 00:14:48.162 09:52:36 -- common/autotest_common.sh@960 -- # wait 70635 00:14:54.746 09:52:42 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:54.746 09:52:42 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:54.746 09:52:42 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:54.746 09:52:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.746 09:52:42 -- common/autotest_common.sh@10 -- # set +x 00:14:54.746 09:52:42 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:54.746 09:52:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:54.746 09:52:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:54.746 09:52:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.746 09:52:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:54.746 09:52:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.746 09:52:42 -- common/autotest_common.sh@1330 -- # shift 00:14:54.746 09:52:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:54.746 09:52:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.746 09:52:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.746 09:52:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:54.746 09:52:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:54.746 09:52:42 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:54.746 09:52:42 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:54.746 09:52:42 -- common/autotest_common.sh@1336 -- # break 00:14:54.746 09:52:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:54.746 09:52:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:54.746 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:54.746 fio-3.35 00:14:54.746 Starting 1 thread 00:14:58.945 00:14:58.945 test: (groupid=0, jobs=1): err= 0: pid=70858: Sun Dec 15 09:52:47 2024 00:14:58.945 read: IOPS=1116, BW=74.1MiB/s (77.7MB/s)(255MiB/3433msec) 00:14:58.945 slat (nsec): min=2960, max=81306, avg=4307.71, stdev=2290.48 00:14:58.945 clat (usec): min=258, max=1744, avg=402.00, stdev=140.74 00:14:58.945 lat (usec): min=262, max=1755, avg=406.31, stdev=141.56 00:14:58.945 clat percentiles (usec): 00:14:58.945 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 306], 20.00th=[ 310], 00:14:58.945 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 388], 00:14:58.945 | 70.00th=[ 474], 80.00th=[ 510], 90.00th=[ 529], 95.00th=[ 635], 00:14:58.945 | 99.00th=[ 906], 99.50th=[ 979], 99.90th=[ 1188], 99.95th=[ 1434], 00:14:58.945 | 99.99th=[ 1745] 00:14:58.945 write: IOPS=1124, BW=74.7MiB/s (78.3MB/s)(256MiB/3430msec); 0 zone resets 00:14:58.946 slat (nsec): min=13290, max=79870, avg=18038.52, stdev=3501.71 00:14:58.946 clat (usec): min=283, max=6112, avg=455.64, stdev=202.11 00:14:58.946 lat (usec): min=299, max=6130, avg=473.68, stdev=203.60 00:14:58.946 clat percentiles (usec): 00:14:58.946 | 1.00th=[ 302], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:14:58.946 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 400], 00:14:58.946 | 70.00th=[ 537], 80.00th=[ 594], 90.00th=[ 627], 95.00th=[ 898], 00:14:58.946 | 99.00th=[ 1045], 99.50th=[ 1237], 99.90th=[ 1663], 99.95th=[ 1827], 00:14:58.946 | 99.99th=[ 6128] 00:14:58.946 bw ( KiB/s): min=52360, max=100232, per=96.34%, avg=73644.00, stdev=20960.38, samples=6 00:14:58.946 iops : min= 770, max= 1474, avg=1083.00, stdev=308.24, samples=6 00:14:58.946 lat (usec) : 500=69.58%, 750=24.83%, 1000=4.73% 00:14:58.946 lat (msec) : 2=0.85%, 10=0.01% 00:14:58.946 cpu : usr=99.39%, sys=0.09%, ctx=7, majf=0, minf=1318 00:14:58.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:58.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.946 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:58.946 00:14:58.946 Run status group 0 (all jobs): 00:14:58.946 READ: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=255MiB (267MB), run=3433-3433msec 00:14:58.946 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=256MiB (269MB), run=3430-3430msec 00:14:59.884 ----------------------------------------------------- 00:14:59.884 Suppressions used: 00:14:59.884 count bytes template 00:14:59.884 1 5 /usr/src/fio/parse.c 00:14:59.884 1 8 libtcmalloc_minimal.so 00:14:59.884 1 904 libcrypto.so 00:14:59.884 ----------------------------------------------------- 00:14:59.884 00:14:59.884 09:52:48 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:59.884 09:52:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.884 09:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:59.884 09:52:48 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:59.884 09:52:48 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:59.884 09:52:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.884 09:52:48 -- common/autotest_common.sh@10 -- # set +x 00:14:59.884 09:52:48 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:59.884 09:52:48 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:59.884 09:52:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:59.884 09:52:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.884 09:52:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:59.884 09:52:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.884 09:52:48 -- common/autotest_common.sh@1330 -- # shift 00:14:59.884 09:52:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:59.884 09:52:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.884 09:52:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.884 09:52:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:59.884 09:52:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:59.884 09:52:48 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.884 09:52:48 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.884 09:52:48 -- common/autotest_common.sh@1336 -- # break 00:14:59.884 09:52:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.884 09:52:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:59.884 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:59.884 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:59.884 fio-3.35 00:14:59.884 Starting 2 threads 00:15:26.439 00:15:26.439 first_half: (groupid=0, jobs=1): err= 0: pid=70945: Sun Dec 15 09:53:14 2024 00:15:26.439 read: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(255MiB/25005msec) 00:15:26.439 slat (nsec): min=3027, max=97904, avg=5017.25, stdev=1293.35 00:15:26.439 clat (usec): min=607, max=380328, avg=39468.28, stdev=27902.02 00:15:26.439 lat (usec): min=611, max=380333, avg=39473.30, stdev=27902.20 00:15:26.439 clat percentiles (msec): 00:15:26.439 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:15:26.439 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 33], 60.00th=[ 35], 00:15:26.439 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 52], 95.00th=[ 73], 00:15:26.439 | 99.00th=[ 188], 99.50th=[ 247], 99.90th=[ 313], 99.95th=[ 330], 00:15:26.439 | 99.99th=[ 372] 00:15:26.439 write: IOPS=3085, BW=12.1MiB/s (12.6MB/s)(256MiB/21243msec); 0 zone resets 00:15:26.439 slat (usec): min=3, max=4277, avg= 6.25, stdev=18.84 00:15:26.439 clat (usec): min=375, max=75261, avg=9485.76, stdev=13143.14 00:15:26.439 lat (usec): min=379, max=75266, avg=9492.01, stdev=13143.12 00:15:26.439 clat percentiles (usec): 00:15:26.439 | 1.00th=[ 627], 5.00th=[ 750], 10.00th=[ 881], 20.00th=[ 1696], 00:15:26.439 | 30.00th=[ 2999], 40.00th=[ 4883], 50.00th=[ 6128], 60.00th=[ 8291], 00:15:26.439 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[13698], 95.00th=[51119], 00:15:26.439 | 99.00th=[61604], 99.50th=[64226], 99.90th=[70779], 99.95th=[71828], 00:15:26.439 | 99.99th=[73925] 00:15:26.439 bw ( KiB/s): min= 360, max=45424, per=92.32%, 
avg=20968.16, stdev=15150.12, samples=25 00:15:26.439 iops : min= 90, max=11356, avg=5242.04, stdev=3787.53, samples=25 00:15:26.439 lat (usec) : 500=0.05%, 750=2.49%, 1000=4.14% 00:15:26.439 lat (msec) : 2=4.50%, 4=6.82%, 10=19.43%, 20=9.61%, 50=44.76% 00:15:26.439 lat (msec) : 100=6.70%, 250=1.26%, 500=0.24% 00:15:26.439 cpu : usr=99.44%, sys=0.13%, ctx=38, majf=0, minf=5569 00:15:26.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:26.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.439 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.439 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.439 second_half: (groupid=0, jobs=1): err= 0: pid=70946: Sun Dec 15 09:53:14 2024 00:15:26.439 read: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(255MiB/25154msec) 00:15:26.439 slat (nsec): min=2997, max=49276, avg=4782.16, stdev=1324.91 00:15:26.439 clat (usec): min=597, max=389521, avg=38974.73, stdev=30843.12 00:15:26.439 lat (usec): min=602, max=389541, avg=38979.52, stdev=30843.40 00:15:26.439 clat percentiles (msec): 00:15:26.439 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:15:26.439 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 35], 00:15:26.439 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 51], 95.00th=[ 67], 00:15:26.439 | 99.00th=[ 207], 99.50th=[ 266], 99.90th=[ 305], 99.95th=[ 326], 00:15:26.439 | 99.99th=[ 388] 00:15:26.439 write: IOPS=2839, BW=11.1MiB/s (11.6MB/s)(256MiB/23084msec); 0 zone resets 00:15:26.439 slat (usec): min=3, max=675, avg= 6.50, stdev= 5.38 00:15:26.439 clat (usec): min=352, max=75387, avg=10268.52, stdev=14836.63 00:15:26.439 lat (usec): min=358, max=75393, avg=10275.02, stdev=14836.73 00:15:26.439 clat percentiles (usec): 00:15:26.439 | 1.00th=[ 644], 5.00th=[ 758], 10.00th=[ 857], 20.00th=[ 1549], 00:15:26.439 | 30.00th=[ 2311], 40.00th=[ 3687], 50.00th=[ 5276], 60.00th=[ 7111], 00:15:26.439 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[28967], 95.00th=[52691], 00:15:26.439 | 99.00th=[63177], 99.50th=[66323], 99.90th=[72877], 99.95th=[73925], 00:15:26.439 | 99.99th=[74974] 00:15:26.439 bw ( KiB/s): min= 512, max=62408, per=100.00%, avg=22795.13, stdev=16904.16, samples=23 00:15:26.439 iops : min= 128, max=15602, avg=5698.78, stdev=4226.04, samples=23 00:15:26.439 lat (usec) : 500=0.01%, 750=2.39%, 1000=4.71% 00:15:26.439 lat (msec) : 2=5.69%, 4=8.25%, 10=18.28%, 20=6.66%, 50=45.76% 00:15:26.439 lat (msec) : 100=6.64%, 250=1.31%, 500=0.32% 00:15:26.439 cpu : usr=99.45%, sys=0.11%, ctx=75, majf=0, minf=5540 00:15:26.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:26.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.439 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.439 issued rwts: total=65312,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.439 00:15:26.439 Run status group 0 (all jobs): 00:15:26.439 READ: bw=20.3MiB/s (21.3MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.7MB/s), io=510MiB (535MB), run=25005-25154msec 00:15:26.439 WRITE: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-12.1MiB/s (11.6MB/s-12.6MB/s), io=512MiB (537MB), run=21243-23084msec 00:15:28.350 ----------------------------------------------------- 00:15:28.350 Suppressions used: 00:15:28.350 count bytes template 00:15:28.350 2 10 
/usr/src/fio/parse.c 00:15:28.350 3 288 /usr/src/fio/iolog.c 00:15:28.350 1 8 libtcmalloc_minimal.so 00:15:28.350 1 904 libcrypto.so 00:15:28.350 ----------------------------------------------------- 00:15:28.350 00:15:28.611 09:53:17 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:28.611 09:53:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.612 09:53:17 -- common/autotest_common.sh@10 -- # set +x 00:15:28.612 09:53:17 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:28.612 09:53:17 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:28.612 09:53:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.612 09:53:17 -- common/autotest_common.sh@10 -- # set +x 00:15:28.612 09:53:17 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:28.612 09:53:17 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:28.612 09:53:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:28.612 09:53:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:28.612 09:53:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:28.612 09:53:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.612 09:53:17 -- common/autotest_common.sh@1330 -- # shift 00:15:28.612 09:53:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:28.612 09:53:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.612 09:53:17 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:28.612 09:53:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.612 09:53:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:28.612 09:53:17 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.612 09:53:17 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.612 09:53:17 -- common/autotest_common.sh@1336 -- # break 00:15:28.612 09:53:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.612 09:53:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:28.612 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:28.612 fio-3.35 00:15:28.612 Starting 1 thread 00:15:43.539 00:15:43.540 test: (groupid=0, jobs=1): err= 0: pid=71281: Sun Dec 15 09:53:31 2024 00:15:43.540 read: IOPS=8385, BW=32.8MiB/s (34.3MB/s)(255MiB/7776msec) 00:15:43.540 slat (nsec): min=3020, max=88417, avg=4713.54, stdev=1103.78 00:15:43.540 clat (usec): min=503, max=34878, avg=15256.79, stdev=2251.90 00:15:43.540 lat (usec): min=506, max=34883, avg=15261.50, stdev=2251.96 00:15:43.540 clat percentiles (usec): 00:15:43.540 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13829], 20.00th=[13960], 00:15:43.540 | 30.00th=[14222], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:15:43.540 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[20841], 00:15:43.540 | 99.00th=[24773], 99.50th=[26608], 99.90th=[29754], 99.95th=[31851], 00:15:43.540 | 99.99th=[34341] 00:15:43.540 write: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(256MiB/5305msec); 0 zone resets 
00:15:43.540 slat (usec): min=4, max=2135, avg= 6.91, stdev= 9.09 00:15:43.540 clat (usec): min=447, max=39701, avg=10318.69, stdev=10243.00 00:15:43.540 lat (usec): min=452, max=39708, avg=10325.60, stdev=10243.17 00:15:43.540 clat percentiles (usec): 00:15:43.540 | 1.00th=[ 603], 5.00th=[ 668], 10.00th=[ 709], 20.00th=[ 824], 00:15:43.540 | 30.00th=[ 1057], 40.00th=[ 1434], 50.00th=[ 7570], 60.00th=[11994], 00:15:43.540 | 70.00th=[15533], 80.00th=[18220], 90.00th=[27395], 95.00th=[30278], 00:15:43.540 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[38011], 00:15:43.540 | 99.99th=[39584] 00:15:43.540 bw ( KiB/s): min=30163, max=71896, per=96.44%, avg=47657.00, stdev=14146.31, samples=11 00:15:43.540 iops : min= 7540, max=17974, avg=11914.18, stdev=3536.67, samples=11 00:15:43.540 lat (usec) : 500=0.01%, 750=7.13%, 1000=6.64% 00:15:43.540 lat (msec) : 2=6.86%, 4=0.54%, 10=6.32%, 20=61.13%, 50=11.38% 00:15:43.540 cpu : usr=99.10%, sys=0.25%, ctx=27, majf=0, minf=5567 00:15:43.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:43.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.540 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.540 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.540 00:15:43.540 Run status group 0 (all jobs): 00:15:43.540 READ: bw=32.8MiB/s (34.3MB/s), 32.8MiB/s-32.8MiB/s (34.3MB/s-34.3MB/s), io=255MiB (267MB), run=7776-7776msec 00:15:43.540 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=256MiB (268MB), run=5305-5305msec 00:15:44.485 ----------------------------------------------------- 00:15:44.485 Suppressions used: 00:15:44.485 count bytes template 00:15:44.485 1 5 /usr/src/fio/parse.c 00:15:44.485 2 192 /usr/src/fio/iolog.c 00:15:44.485 1 8 libtcmalloc_minimal.so 00:15:44.485 1 904 libcrypto.so 00:15:44.485 ----------------------------------------------------- 00:15:44.485 00:15:44.485 09:53:33 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:44.485 09:53:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.485 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:44.748 09:53:33 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:44.748 Remove shared memory files 00:15:44.748 09:53:33 -- ftl/fio.sh@85 -- # remove_shm 00:15:44.748 09:53:33 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:44.748 09:53:33 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:44.748 09:53:33 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:44.748 09:53:33 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56180 /dev/shm/spdk_tgt_trace.pid69535 00:15:44.748 09:53:33 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:44.748 09:53:33 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:44.748 00:15:44.748 real 1m3.799s 00:15:44.748 user 2m11.677s 00:15:44.748 sys 0m7.775s 00:15:44.748 09:53:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:44.748 ************************************ 00:15:44.748 END TEST ftl_fio_basic 00:15:44.748 ************************************ 00:15:44.748 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:44.748 09:53:33 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:44.748 09:53:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:44.748 
09:53:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.748 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:44.748 ************************************ 00:15:44.748 START TEST ftl_bdevperf 00:15:44.748 ************************************ 00:15:44.748 09:53:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:44.748 * Looking for test storage... 00:15:44.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:44.748 09:53:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:44.748 09:53:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:44.748 09:53:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:44.748 09:53:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:44.748 09:53:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:44.748 09:53:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:44.748 09:53:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:44.748 09:53:33 -- scripts/common.sh@335 -- # IFS=.-: 00:15:44.748 09:53:33 -- scripts/common.sh@335 -- # read -ra ver1 00:15:44.748 09:53:33 -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.748 09:53:33 -- scripts/common.sh@336 -- # read -ra ver2 00:15:44.748 09:53:33 -- scripts/common.sh@337 -- # local 'op=<' 00:15:44.748 09:53:33 -- scripts/common.sh@339 -- # ver1_l=2 00:15:44.748 09:53:33 -- scripts/common.sh@340 -- # ver2_l=1 00:15:44.748 09:53:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:44.748 09:53:33 -- scripts/common.sh@343 -- # case "$op" in 00:15:44.748 09:53:33 -- scripts/common.sh@344 -- # : 1 00:15:44.748 09:53:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:44.748 09:53:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:44.748 09:53:33 -- scripts/common.sh@364 -- # decimal 1 00:15:44.748 09:53:33 -- scripts/common.sh@352 -- # local d=1 00:15:44.748 09:53:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.748 09:53:33 -- scripts/common.sh@354 -- # echo 1 00:15:44.748 09:53:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:44.748 09:53:33 -- scripts/common.sh@365 -- # decimal 2 00:15:44.748 09:53:33 -- scripts/common.sh@352 -- # local d=2 00:15:44.748 09:53:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.748 09:53:33 -- scripts/common.sh@354 -- # echo 2 00:15:44.748 09:53:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:44.748 09:53:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:44.748 09:53:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:44.748 09:53:33 -- scripts/common.sh@367 -- # return 0 00:15:44.748 09:53:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.748 09:53:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.748 --rc genhtml_branch_coverage=1 00:15:44.748 --rc genhtml_function_coverage=1 00:15:44.748 --rc genhtml_legend=1 00:15:44.748 --rc geninfo_all_blocks=1 00:15:44.748 --rc geninfo_unexecuted_blocks=1 00:15:44.748 00:15:44.748 ' 00:15:44.748 09:53:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.748 --rc genhtml_branch_coverage=1 00:15:44.748 --rc genhtml_function_coverage=1 00:15:44.748 --rc genhtml_legend=1 00:15:44.748 --rc geninfo_all_blocks=1 00:15:44.748 --rc geninfo_unexecuted_blocks=1 00:15:44.748 00:15:44.748 ' 00:15:44.748 09:53:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.748 --rc genhtml_branch_coverage=1 00:15:44.748 --rc genhtml_function_coverage=1 00:15:44.748 --rc genhtml_legend=1 00:15:44.748 --rc geninfo_all_blocks=1 00:15:44.748 --rc geninfo_unexecuted_blocks=1 00:15:44.748 00:15:44.748 ' 00:15:44.748 09:53:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:44.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.748 --rc genhtml_branch_coverage=1 00:15:44.748 --rc genhtml_function_coverage=1 00:15:44.748 --rc genhtml_legend=1 00:15:44.748 --rc geninfo_all_blocks=1 00:15:44.748 --rc geninfo_unexecuted_blocks=1 00:15:44.748 00:15:44.748 ' 00:15:44.748 09:53:33 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:44.748 09:53:33 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:44.748 09:53:33 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:44.748 09:53:33 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:44.748 09:53:33 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
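
The scripts/common.sh xtrace above is the stock dotted-version comparison ("lt 1.15 2") deciding which lcov flags to use: each version string is split on ".", "-" and ":" and the fields are compared numerically, left to right. A condensed sketch of that pattern, assuming purely numeric fields:

    # version_lt A B: succeed when dotted version A sorts before B.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: use legacy --rc names"
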
00:15:44.748 09:53:33 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:44.748 09:53:33 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.748 09:53:33 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:44.748 09:53:33 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:44.748 09:53:33 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.748 09:53:33 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.748 09:53:33 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:44.748 09:53:33 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:44.748 09:53:33 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:44.748 09:53:33 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:44.748 09:53:33 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:44.748 09:53:33 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:44.748 09:53:33 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.748 09:53:33 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.748 09:53:33 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:44.748 09:53:33 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:44.748 09:53:33 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:44.748 09:53:33 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:44.748 09:53:33 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:44.748 09:53:33 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:44.748 09:53:33 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:44.748 09:53:33 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:44.748 09:53:33 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.748 09:53:33 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:45.010 09:53:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:45.010 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71520 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@22 -- # waitforlisten 71520 00:15:45.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
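
Here bdevperf is launched with -z (sit idle until a perform_tests RPC arrives, which is why bdevperf.py is invoked later) and -T ftl0, and waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock, with the trap guaranteeing the daemon is killed on any exit path. A minimal sketch of that start/wait/cleanup pattern; the real waitforlisten in autotest_common.sh additionally verifies the pid stays alive while polling:

    # Launch bdevperf, wait for its RPC socket, kill it when the script exits.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill "$bdevperf_pid"' SIGINT SIGTERM EXIT
    until "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
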
00:15:45.010 09:53:33 -- common/autotest_common.sh@829 -- # '[' -z 71520 ']' 00:15:45.010 09:53:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.010 09:53:33 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:45.010 09:53:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.010 09:53:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.010 09:53:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.010 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:15:45.010 [2024-12-15 09:53:33.836367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:45.010 [2024-12-15 09:53:33.836511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71520 ] 00:15:45.010 [2024-12-15 09:53:33.989870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.271 [2024-12-15 09:53:34.225706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.844 09:53:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.844 09:53:34 -- common/autotest_common.sh@862 -- # return 0 00:15:45.844 09:53:34 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:45.844 09:53:34 -- ftl/common.sh@54 -- # local name=nvme0 00:15:45.844 09:53:34 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:45.844 09:53:34 -- ftl/common.sh@56 -- # local size=103424 00:15:45.844 09:53:34 -- ftl/common.sh@59 -- # local base_bdev 00:15:45.844 09:53:34 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:46.106 09:53:34 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:46.106 09:53:34 -- ftl/common.sh@62 -- # local base_size 00:15:46.106 09:53:34 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:46.106 09:53:34 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:46.106 09:53:34 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:46.106 09:53:34 -- common/autotest_common.sh@1369 -- # local bs 00:15:46.106 09:53:34 -- common/autotest_common.sh@1370 -- # local nb 00:15:46.106 09:53:34 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:46.367 09:53:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:46.367 { 00:15:46.367 "name": "nvme0n1", 00:15:46.367 "aliases": [ 00:15:46.367 "269f0804-ed0f-481e-b7d5-2765040b88b6" 00:15:46.367 ], 00:15:46.367 "product_name": "NVMe disk", 00:15:46.367 "block_size": 4096, 00:15:46.367 "num_blocks": 1310720, 00:15:46.367 "uuid": "269f0804-ed0f-481e-b7d5-2765040b88b6", 00:15:46.367 "assigned_rate_limits": { 00:15:46.367 "rw_ios_per_sec": 0, 00:15:46.367 "rw_mbytes_per_sec": 0, 00:15:46.367 "r_mbytes_per_sec": 0, 00:15:46.367 "w_mbytes_per_sec": 0 00:15:46.367 }, 00:15:46.367 "claimed": true, 00:15:46.367 "claim_type": "read_many_write_one", 00:15:46.367 "zoned": false, 00:15:46.367 "supported_io_types": { 00:15:46.367 "read": true, 00:15:46.367 "write": true, 00:15:46.367 "unmap": true, 00:15:46.367 "write_zeroes": true, 00:15:46.367 "flush": true, 00:15:46.367 "reset": true, 00:15:46.367 "compare": true, 00:15:46.367 "compare_and_write": false, 00:15:46.367 
"abort": true, 00:15:46.367 "nvme_admin": true, 00:15:46.367 "nvme_io": true 00:15:46.367 }, 00:15:46.367 "driver_specific": { 00:15:46.367 "nvme": [ 00:15:46.367 { 00:15:46.367 "pci_address": "0000:00:07.0", 00:15:46.367 "trid": { 00:15:46.367 "trtype": "PCIe", 00:15:46.367 "traddr": "0000:00:07.0" 00:15:46.367 }, 00:15:46.367 "ctrlr_data": { 00:15:46.367 "cntlid": 0, 00:15:46.367 "vendor_id": "0x1b36", 00:15:46.367 "model_number": "QEMU NVMe Ctrl", 00:15:46.367 "serial_number": "12341", 00:15:46.367 "firmware_revision": "8.0.0", 00:15:46.367 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:46.367 "oacs": { 00:15:46.367 "security": 0, 00:15:46.367 "format": 1, 00:15:46.367 "firmware": 0, 00:15:46.367 "ns_manage": 1 00:15:46.367 }, 00:15:46.367 "multi_ctrlr": false, 00:15:46.367 "ana_reporting": false 00:15:46.367 }, 00:15:46.367 "vs": { 00:15:46.367 "nvme_version": "1.4" 00:15:46.367 }, 00:15:46.367 "ns_data": { 00:15:46.367 "id": 1, 00:15:46.367 "can_share": false 00:15:46.367 } 00:15:46.367 } 00:15:46.367 ], 00:15:46.367 "mp_policy": "active_passive" 00:15:46.367 } 00:15:46.367 } 00:15:46.367 ]' 00:15:46.367 09:53:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:46.367 09:53:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:46.367 09:53:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:46.367 09:53:35 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:46.367 09:53:35 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:46.367 09:53:35 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:46.367 09:53:35 -- ftl/common.sh@63 -- # base_size=5120 00:15:46.367 09:53:35 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:46.367 09:53:35 -- ftl/common.sh@67 -- # clear_lvols 00:15:46.367 09:53:35 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:46.367 09:53:35 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:46.628 09:53:35 -- ftl/common.sh@28 -- # stores=5c7447b5-e66a-4ddb-9d84-f467e82ea994 00:15:46.628 09:53:35 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:46.628 09:53:35 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c7447b5-e66a-4ddb-9d84-f467e82ea994 00:15:46.628 09:53:35 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:46.889 09:53:35 -- ftl/common.sh@68 -- # lvs=0f566465-716d-4571-8c36-35971a44efe0 00:15:46.889 09:53:35 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f566465-716d-4571-8c36-35971a44efe0 00:15:47.148 09:53:36 -- ftl/bdevperf.sh@23 -- # split_bdev=71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.148 09:53:36 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.148 09:53:36 -- ftl/common.sh@35 -- # local name=nvc0 00:15:47.148 09:53:36 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:47.148 09:53:36 -- ftl/common.sh@37 -- # local base_bdev=71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.148 09:53:36 -- ftl/common.sh@38 -- # local cache_size= 00:15:47.148 09:53:36 -- ftl/common.sh@41 -- # get_bdev_size 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.148 09:53:36 -- common/autotest_common.sh@1367 -- # local bdev_name=71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.148 09:53:36 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:47.148 09:53:36 -- common/autotest_common.sh@1369 -- # local bs 00:15:47.148 09:53:36 -- 
common/autotest_common.sh@1370 -- # local nb 00:15:47.148 09:53:36 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.407 09:53:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:47.407 { 00:15:47.407 "name": "71dfae1f-87f0-433e-b1d7-b283d6a0f071", 00:15:47.407 "aliases": [ 00:15:47.407 "lvs/nvme0n1p0" 00:15:47.407 ], 00:15:47.407 "product_name": "Logical Volume", 00:15:47.407 "block_size": 4096, 00:15:47.407 "num_blocks": 26476544, 00:15:47.407 "uuid": "71dfae1f-87f0-433e-b1d7-b283d6a0f071", 00:15:47.407 "assigned_rate_limits": { 00:15:47.407 "rw_ios_per_sec": 0, 00:15:47.407 "rw_mbytes_per_sec": 0, 00:15:47.407 "r_mbytes_per_sec": 0, 00:15:47.407 "w_mbytes_per_sec": 0 00:15:47.407 }, 00:15:47.407 "claimed": false, 00:15:47.407 "zoned": false, 00:15:47.407 "supported_io_types": { 00:15:47.407 "read": true, 00:15:47.407 "write": true, 00:15:47.407 "unmap": true, 00:15:47.407 "write_zeroes": true, 00:15:47.407 "flush": false, 00:15:47.407 "reset": true, 00:15:47.407 "compare": false, 00:15:47.407 "compare_and_write": false, 00:15:47.407 "abort": false, 00:15:47.407 "nvme_admin": false, 00:15:47.407 "nvme_io": false 00:15:47.407 }, 00:15:47.407 "driver_specific": { 00:15:47.407 "lvol": { 00:15:47.407 "lvol_store_uuid": "0f566465-716d-4571-8c36-35971a44efe0", 00:15:47.407 "base_bdev": "nvme0n1", 00:15:47.407 "thin_provision": true, 00:15:47.407 "snapshot": false, 00:15:47.407 "clone": false, 00:15:47.407 "esnap_clone": false 00:15:47.407 } 00:15:47.407 } 00:15:47.407 } 00:15:47.407 ]' 00:15:47.407 09:53:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:47.407 09:53:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:47.407 09:53:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:47.407 09:53:36 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:47.407 09:53:36 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:47.407 09:53:36 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:47.407 09:53:36 -- ftl/common.sh@41 -- # local base_size=5171 00:15:47.407 09:53:36 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:47.407 09:53:36 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:47.669 09:53:36 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:47.669 09:53:36 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:47.669 09:53:36 -- ftl/common.sh@48 -- # get_bdev_size 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.669 09:53:36 -- common/autotest_common.sh@1367 -- # local bdev_name=71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.669 09:53:36 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:47.669 09:53:36 -- common/autotest_common.sh@1369 -- # local bs 00:15:47.669 09:53:36 -- common/autotest_common.sh@1370 -- # local nb 00:15:47.669 09:53:36 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:47.926 09:53:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:47.926 { 00:15:47.926 "name": "71dfae1f-87f0-433e-b1d7-b283d6a0f071", 00:15:47.926 "aliases": [ 00:15:47.926 "lvs/nvme0n1p0" 00:15:47.926 ], 00:15:47.926 "product_name": "Logical Volume", 00:15:47.926 "block_size": 4096, 00:15:47.926 "num_blocks": 26476544, 00:15:47.926 "uuid": "71dfae1f-87f0-433e-b1d7-b283d6a0f071", 00:15:47.926 "assigned_rate_limits": { 00:15:47.926 "rw_ios_per_sec": 0, 00:15:47.926 
"rw_mbytes_per_sec": 0, 00:15:47.926 "r_mbytes_per_sec": 0, 00:15:47.926 "w_mbytes_per_sec": 0 00:15:47.926 }, 00:15:47.926 "claimed": false, 00:15:47.926 "zoned": false, 00:15:47.926 "supported_io_types": { 00:15:47.926 "read": true, 00:15:47.926 "write": true, 00:15:47.926 "unmap": true, 00:15:47.926 "write_zeroes": true, 00:15:47.926 "flush": false, 00:15:47.926 "reset": true, 00:15:47.926 "compare": false, 00:15:47.926 "compare_and_write": false, 00:15:47.926 "abort": false, 00:15:47.926 "nvme_admin": false, 00:15:47.926 "nvme_io": false 00:15:47.926 }, 00:15:47.926 "driver_specific": { 00:15:47.926 "lvol": { 00:15:47.927 "lvol_store_uuid": "0f566465-716d-4571-8c36-35971a44efe0", 00:15:47.927 "base_bdev": "nvme0n1", 00:15:47.927 "thin_provision": true, 00:15:47.927 "snapshot": false, 00:15:47.927 "clone": false, 00:15:47.927 "esnap_clone": false 00:15:47.927 } 00:15:47.927 } 00:15:47.927 } 00:15:47.927 ]' 00:15:47.927 09:53:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:47.927 09:53:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:47.927 09:53:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:47.927 09:53:36 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:47.927 09:53:36 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:47.927 09:53:36 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:47.927 09:53:36 -- ftl/common.sh@48 -- # cache_size=5171 00:15:47.927 09:53:36 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:48.185 09:53:36 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:48.185 09:53:36 -- ftl/bdevperf.sh@26 -- # get_bdev_size 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:48.185 09:53:36 -- common/autotest_common.sh@1367 -- # local bdev_name=71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:48.185 09:53:36 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:48.185 09:53:36 -- common/autotest_common.sh@1369 -- # local bs 00:15:48.185 09:53:36 -- common/autotest_common.sh@1370 -- # local nb 00:15:48.185 09:53:36 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71dfae1f-87f0-433e-b1d7-b283d6a0f071 00:15:48.185 09:53:37 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:48.185 { 00:15:48.185 "name": "71dfae1f-87f0-433e-b1d7-b283d6a0f071", 00:15:48.185 "aliases": [ 00:15:48.185 "lvs/nvme0n1p0" 00:15:48.185 ], 00:15:48.185 "product_name": "Logical Volume", 00:15:48.185 "block_size": 4096, 00:15:48.185 "num_blocks": 26476544, 00:15:48.185 "uuid": "71dfae1f-87f0-433e-b1d7-b283d6a0f071", 00:15:48.185 "assigned_rate_limits": { 00:15:48.185 "rw_ios_per_sec": 0, 00:15:48.185 "rw_mbytes_per_sec": 0, 00:15:48.185 "r_mbytes_per_sec": 0, 00:15:48.185 "w_mbytes_per_sec": 0 00:15:48.185 }, 00:15:48.185 "claimed": false, 00:15:48.185 "zoned": false, 00:15:48.185 "supported_io_types": { 00:15:48.185 "read": true, 00:15:48.185 "write": true, 00:15:48.185 "unmap": true, 00:15:48.185 "write_zeroes": true, 00:15:48.185 "flush": false, 00:15:48.185 "reset": true, 00:15:48.185 "compare": false, 00:15:48.185 "compare_and_write": false, 00:15:48.185 "abort": false, 00:15:48.185 "nvme_admin": false, 00:15:48.185 "nvme_io": false 00:15:48.185 }, 00:15:48.185 "driver_specific": { 00:15:48.185 "lvol": { 00:15:48.185 "lvol_store_uuid": "0f566465-716d-4571-8c36-35971a44efe0", 00:15:48.185 "base_bdev": "nvme0n1", 00:15:48.185 "thin_provision": true, 00:15:48.185 "snapshot": false, 00:15:48.185 "clone": false, 00:15:48.185 
"esnap_clone": false 00:15:48.185 } 00:15:48.185 } 00:15:48.185 } 00:15:48.185 ]' 00:15:48.185 09:53:37 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:48.185 09:53:37 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:48.185 09:53:37 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:48.445 09:53:37 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:48.445 09:53:37 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:48.445 09:53:37 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:48.445 09:53:37 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:48.445 09:53:37 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 71dfae1f-87f0-433e-b1d7-b283d6a0f071 -c nvc0n1p0 --l2p_dram_limit 20 00:15:48.445 [2024-12-15 09:53:37.387647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.387687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:48.445 [2024-12-15 09:53:37.387699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:48.445 [2024-12-15 09:53:37.387706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.387748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.387756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:48.445 [2024-12-15 09:53:37.387764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:48.445 [2024-12-15 09:53:37.387770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.387784] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:48.445 [2024-12-15 09:53:37.388374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:48.445 [2024-12-15 09:53:37.388396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.388402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:48.445 [2024-12-15 09:53:37.388410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:15:48.445 [2024-12-15 09:53:37.388415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.388578] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0ea25e4f-2c6c-45d0-858d-26164b5ba4f4 00:15:48.445 [2024-12-15 09:53:37.389613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.389644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:48.445 [2024-12-15 09:53:37.389651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:48.445 [2024-12-15 09:53:37.389658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.394855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.394888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:48.445 [2024-12-15 09:53:37.394895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.169 ms 00:15:48.445 [2024-12-15 09:53:37.394902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 
09:53:37.394967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.394975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:48.445 [2024-12-15 09:53:37.394981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:48.445 [2024-12-15 09:53:37.394991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.395026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.395034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:48.445 [2024-12-15 09:53:37.395042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:48.445 [2024-12-15 09:53:37.395049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.395065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:48.445 [2024-12-15 09:53:37.398077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.398102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:48.445 [2024-12-15 09:53:37.398111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:15:48.445 [2024-12-15 09:53:37.398117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.398143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.398150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:48.445 [2024-12-15 09:53:37.398157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:48.445 [2024-12-15 09:53:37.398163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.398181] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:48.445 [2024-12-15 09:53:37.398281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:48.445 [2024-12-15 09:53:37.398298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:48.445 [2024-12-15 09:53:37.398306] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:48.445 [2024-12-15 09:53:37.398316] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:48.445 [2024-12-15 09:53:37.398323] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:48.445 [2024-12-15 09:53:37.398330] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:48.445 [2024-12-15 09:53:37.398336] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:48.445 [2024-12-15 09:53:37.398345] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:48.445 [2024-12-15 09:53:37.398351] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:48.445 [2024-12-15 09:53:37.398359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.398364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:48.445 [2024-12-15 09:53:37.398371] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:15:48.445 [2024-12-15 09:53:37.398376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.398423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.445 [2024-12-15 09:53:37.398428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:48.445 [2024-12-15 09:53:37.398435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:48.445 [2024-12-15 09:53:37.398440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.445 [2024-12-15 09:53:37.398494] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:48.445 [2024-12-15 09:53:37.398501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:48.445 [2024-12-15 09:53:37.398508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.445 [2024-12-15 09:53:37.398518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.445 [2024-12-15 09:53:37.398525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:48.445 [2024-12-15 09:53:37.398530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:48.445 [2024-12-15 09:53:37.398537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:48.445 [2024-12-15 09:53:37.398543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:48.445 [2024-12-15 09:53:37.398549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:48.445 [2024-12-15 09:53:37.398555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.445 [2024-12-15 09:53:37.398562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:48.445 [2024-12-15 09:53:37.398567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:48.445 [2024-12-15 09:53:37.398573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.445 [2024-12-15 09:53:37.398579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:48.445 [2024-12-15 09:53:37.398588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:48.445 [2024-12-15 09:53:37.398593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.445 [2024-12-15 09:53:37.398601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:48.446 [2024-12-15 09:53:37.398606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:48.446 [2024-12-15 09:53:37.398612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:48.446 [2024-12-15 09:53:37.398623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:48.446 [2024-12-15 09:53:37.398628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:48.446 [2024-12-15 09:53:37.398639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:48.446 
[2024-12-15 09:53:37.398657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:48.446 [2024-12-15 09:53:37.398672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:48.446 [2024-12-15 09:53:37.398691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:48.446 [2024-12-15 09:53:37.398707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.446 [2024-12-15 09:53:37.398719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:48.446 [2024-12-15 09:53:37.398725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:48.446 [2024-12-15 09:53:37.398730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.446 [2024-12-15 09:53:37.398736] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:48.446 [2024-12-15 09:53:37.398742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:48.446 [2024-12-15 09:53:37.398748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.446 [2024-12-15 09:53:37.398760] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:48.446 [2024-12-15 09:53:37.398766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:48.446 [2024-12-15 09:53:37.398773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:48.446 [2024-12-15 09:53:37.398779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:48.446 [2024-12-15 09:53:37.398786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:48.446 [2024-12-15 09:53:37.398791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:48.446 [2024-12-15 09:53:37.398798] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:48.446 [2024-12-15 09:53:37.398805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.446 [2024-12-15 09:53:37.398814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:48.446 [2024-12-15 09:53:37.398820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:48.446 [2024-12-15 09:53:37.398827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:48.446 [2024-12-15 
09:53:37.398832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:48.446 [2024-12-15 09:53:37.398839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:48.446 [2024-12-15 09:53:37.398845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:48.446 [2024-12-15 09:53:37.398851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:48.446 [2024-12-15 09:53:37.398856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:48.446 [2024-12-15 09:53:37.398863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:48.446 [2024-12-15 09:53:37.398868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:48.446 [2024-12-15 09:53:37.398876] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:48.446 [2024-12-15 09:53:37.398881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:48.446 [2024-12-15 09:53:37.398889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:48.446 [2024-12-15 09:53:37.398894] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:48.446 [2024-12-15 09:53:37.398902] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.446 [2024-12-15 09:53:37.398907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:48.446 [2024-12-15 09:53:37.398915] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:48.446 [2024-12-15 09:53:37.398920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:48.446 [2024-12-15 09:53:37.398927] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:48.446 [2024-12-15 09:53:37.398932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.398939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:48.446 [2024-12-15 09:53:37.398944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:15:48.446 [2024-12-15 09:53:37.398951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.446 [2024-12-15 09:53:37.411224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.411269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:48.446 [2024-12-15 09:53:37.411278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.237 ms 
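
The layout dump above is internally consistent: 20971520 L2P entries * 4 B per entry = 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" region, and the --l2p_dram_limit 20 passed to bdev_ftl_create caps the resident portion of that table (hence the "l2p maximum resident size is: 19 (of 20) MiB" notice further down). For reference, a sketch replaying the RPC chain that assembled ftl0, with the lvstore/lvol UUIDs captured from each RPC's output rather than hard-coded:

    # Rebuild the ftl0 stack: base NVMe -> lvstore -> thin 103424 MiB lvol,
    # cache NVMe -> 5171 MiB split, then the FTL bdev on top of both.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20
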
00:15:48.446 [2024-12-15 09:53:37.411286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.446 [2024-12-15 09:53:37.411352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.411362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:48.446 [2024-12-15 09:53:37.411368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:48.446 [2024-12-15 09:53:37.411375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.446 [2024-12-15 09:53:37.453741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.453776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:48.446 [2024-12-15 09:53:37.453785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.333 ms 00:15:48.446 [2024-12-15 09:53:37.453792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.446 [2024-12-15 09:53:37.453818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.453829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:48.446 [2024-12-15 09:53:37.453836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:48.446 [2024-12-15 09:53:37.453843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.446 [2024-12-15 09:53:37.454185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.454209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.446 [2024-12-15 09:53:37.454216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:15:48.446 [2024-12-15 09:53:37.454223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.446 [2024-12-15 09:53:37.454319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.446 [2024-12-15 09:53:37.454335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.446 [2024-12-15 09:53:37.454343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:15:48.446 [2024-12-15 09:53:37.454350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.705 [2024-12-15 09:53:37.465953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.705 [2024-12-15 09:53:37.465983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.705 [2024-12-15 09:53:37.465992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.591 ms 00:15:48.705 [2024-12-15 09:53:37.465999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.705 [2024-12-15 09:53:37.475271] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:48.705 [2024-12-15 09:53:37.479690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.705 [2024-12-15 09:53:37.479717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:48.705 [2024-12-15 09:53:37.479727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.631 ms 00:15:48.705 [2024-12-15 09:53:37.479733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.705 [2024-12-15 09:53:37.545419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.705 [2024-12-15 09:53:37.545472] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:48.705 [2024-12-15 09:53:37.545487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.660 ms 00:15:48.706 [2024-12-15 09:53:37.545495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.706 [2024-12-15 09:53:37.545535] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:48.706 [2024-12-15 09:53:37.545547] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:52.913 [2024-12-15 09:53:41.054780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.054860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:52.913 [2024-12-15 09:53:41.054882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3509.224 ms 00:15:52.913 [2024-12-15 09:53:41.054892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.055093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.055104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:52.913 [2024-12-15 09:53:41.055116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:15:52.913 [2024-12-15 09:53:41.055125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.081780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.081830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:52.913 [2024-12-15 09:53:41.081847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.597 ms 00:15:52.913 [2024-12-15 09:53:41.081858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.106979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.107024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:52.913 [2024-12-15 09:53:41.107041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.086 ms 00:15:52.913 [2024-12-15 09:53:41.107048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.107388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.107401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:52.913 [2024-12-15 09:53:41.107413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:15:52.913 [2024-12-15 09:53:41.107421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.176151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.176199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:52.913 [2024-12-15 09:53:41.176214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.688 ms 00:15:52.913 [2024-12-15 09:53:41.176223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.203276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.203322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Clear trim map 00:15:52.913 [2024-12-15 09:53:41.203336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.987 ms 00:15:52.913 [2024-12-15 09:53:41.203344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.204808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.204854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:52.913 [2024-12-15 09:53:41.204869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:15:52.913 [2024-12-15 09:53:41.204880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.231225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.231278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:52.913 [2024-12-15 09:53:41.231292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.300 ms 00:15:52.913 [2024-12-15 09:53:41.231299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.231349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.231359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:52.913 [2024-12-15 09:53:41.231373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:52.913 [2024-12-15 09:53:41.231382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.231473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.913 [2024-12-15 09:53:41.231485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:52.913 [2024-12-15 09:53:41.231496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:15:52.913 [2024-12-15 09:53:41.231503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.913 [2024-12-15 09:53:41.232664] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3844.502 ms, result 0 00:15:52.913 { 00:15:52.913 "name": "ftl0", 00:15:52.913 "uuid": "0ea25e4f-2c6c-45d0-858d-26164b5ba4f4" 00:15:52.913 } 00:15:52.913 09:53:41 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:52.913 09:53:41 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:52.913 09:53:41 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:52.913 09:53:41 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:52.913 [2024-12-15 09:53:41.549003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:52.913 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:52.913 Zero copy mechanism will not be used. 00:15:52.913 Running I/O for 4 seconds... 
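
The first measurement pass drives ftl0 at queue depth 1 with 4 seconds of 69632 B random writes; 69632 = 65536 + 4096, one 4 KiB block past bdevperf's 64 KiB zero-copy threshold, which is why the notice above reports that the zero copy mechanism will not be used. The same pass can be re-run against a live bdevperf with the helper the harness uses:

    # 68 KiB random writes, queue depth 1, 4 s, via the bdevperf RPC helper.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        perform_tests -q 1 -w randwrite -t 4 -o 69632
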
00:15:57.127 00:15:57.127 Latency(us) 00:15:57.127 [2024-12-15T09:53:46.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.127 [2024-12-15T09:53:46.143Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:57.127 ftl0 : 4.00 1246.40 82.77 0.00 0.00 840.36 203.22 2218.14 00:15:57.127 [2024-12-15T09:53:46.143Z] =================================================================================================================== 00:15:57.127 [2024-12-15T09:53:46.143Z] Total : 1246.40 82.77 0.00 0.00 840.36 203.22 2218.14 00:15:57.127 [2024-12-15 09:53:45.557774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:57.127 0 00:15:57.127 09:53:45 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:57.127 [2024-12-15 09:53:45.670236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:57.127 Running I/O for 4 seconds... 00:16:01.337 00:16:01.337 Latency(us) 00:16:01.337 [2024-12-15T09:53:50.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.337 [2024-12-15T09:53:50.353Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.337 ftl0 : 4.03 5466.46 21.35 0.00 0.00 23321.17 299.32 45572.73 00:16:01.337 [2024-12-15T09:53:50.353Z] =================================================================================================================== 00:16:01.337 [2024-12-15T09:53:50.353Z] Total : 5466.46 21.35 0.00 0.00 23321.17 0.00 45572.73 00:16:01.337 [2024-12-15 09:53:49.712587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:01.337 0 00:16:01.337 09:53:49 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:01.337 [2024-12-15 09:53:49.807519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:01.337 Running I/O for 4 seconds... 
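
The MiB/s column in these result tables is simply IOPS * I/O size: 1246.40 IOPS * 69632 B comes to 82.77 MiB/s for the depth-1 pass, and 5466.46 IOPS * 4096 B to 21.35 MiB/s for the depth-128 pass, so both runs agree with their own headline numbers. A quick check:

    # Sanity-check the MiB/s column: IOPS * io_size / 2^20.
    echo '1246.40 * 69632 / 1048576' | bc -l   # 82.77 (depth 1, 68 KiB)
    echo '5466.46 * 4096 / 1048576' | bc -l    # 21.35 (depth 128, 4 KiB)
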
00:16:05.551 00:16:05.551 Latency(us) 00:16:05.551 [2024-12-15T09:53:54.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.551 [2024-12-15T09:53:54.567Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:05.551 Verification LBA range: start 0x0 length 0x1400000 00:16:05.551 ftl0 : 4.01 8894.60 34.74 0.00 0.00 14349.96 203.22 35288.62 00:16:05.551 [2024-12-15T09:53:54.567Z] =================================================================================================================== 00:16:05.551 [2024-12-15T09:53:54.567Z] Total : 8894.60 34.74 0.00 0.00 14349.96 0.00 35288.62 00:16:05.551 [2024-12-15 09:53:53.837403] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:05.551 0 00:16:05.551 09:53:53 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:05.551 [2024-12-15 09:53:53.987487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:53.987525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:05.551 [2024-12-15 09:53:53.987538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:05.551 [2024-12-15 09:53:53.987546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:53.987566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:05.551 [2024-12-15 09:53:53.990102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:53.990134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:05.551 [2024-12-15 09:53:53.990143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms 00:16:05.551 [2024-12-15 09:53:53.990155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:53.992524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:53.992558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:05.551 [2024-12-15 09:53:53.992566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.350 ms 00:16:05.551 [2024-12-15 09:53:53.992575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:54.179303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:54.179359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:05.551 [2024-12-15 09:53:54.179375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 186.711 ms 00:16:05.551 [2024-12-15 09:53:54.179385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:54.185727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:54.185761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:05.551 [2024-12-15 09:53:54.185770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.307 ms 00:16:05.551 [2024-12-15 09:53:54.185780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:54.210849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:54.210890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:16:05.551 [2024-12-15 09:53:54.210902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.008 ms 00:16:05.551 [2024-12-15 09:53:54.210914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:54.227014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:54.227058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:05.551 [2024-12-15 09:53:54.227069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.065 ms 00:16:05.551 [2024-12-15 09:53:54.227079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:54.227221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:54.227235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:05.551 [2024-12-15 09:53:54.227245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:16:05.551 [2024-12-15 09:53:54.227271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.551 [2024-12-15 09:53:54.252213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.551 [2024-12-15 09:53:54.252270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:05.551 [2024-12-15 09:53:54.252280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.926 ms 00:16:05.552 [2024-12-15 09:53:54.252289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.552 [2024-12-15 09:53:54.277397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.552 [2024-12-15 09:53:54.277451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:05.552 [2024-12-15 09:53:54.277462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.068 ms 00:16:05.552 [2024-12-15 09:53:54.277473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.552 [2024-12-15 09:53:54.302365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.552 [2024-12-15 09:53:54.302415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:05.552 [2024-12-15 09:53:54.302425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.849 ms 00:16:05.552 [2024-12-15 09:53:54.302434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.552 [2024-12-15 09:53:54.327449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.552 [2024-12-15 09:53:54.327499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:05.552 [2024-12-15 09:53:54.327509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.933 ms 00:16:05.552 [2024-12-15 09:53:54.327518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.552 [2024-12-15 09:53:54.327562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:05.552 [2024-12-15 09:53:54.327580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:05.552 [2024-12-15 09:53:54.327590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:05.552 [2024-12-15 09:53:54.327601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:05.552 [2024-12-15 09:53:54.327609] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 97 remaining bands identical) 00:16:05.553 [2024-12-15 09:53:54.328516] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:05.553 [2024-12-15 09:53:54.328524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0ea25e4f-2c6c-45d0-858d-26164b5ba4f4 00:16:05.553 [2024-12-15 09:53:54.328536] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:05.553
[2024-12-15 09:53:54.328544] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:05.553 [2024-12-15 09:53:54.328553] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:05.553 [2024-12-15 09:53:54.328562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:05.553 [2024-12-15 09:53:54.328570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:05.553 [2024-12-15 09:53:54.328581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:05.553 [2024-12-15 09:53:54.328590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:05.553 [2024-12-15 09:53:54.328596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:05.553 [2024-12-15 09:53:54.328605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:05.553 [2024-12-15 09:53:54.328612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.553 [2024-12-15 09:53:54.328621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:05.553 [2024-12-15 09:53:54.328631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:16:05.553 [2024-12-15 09:53:54.328640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.342304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.553 [2024-12-15 09:53:54.342349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:05.553 [2024-12-15 09:53:54.342360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.604 ms 00:16:05.553 [2024-12-15 09:53:54.342376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.342604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.553 [2024-12-15 09:53:54.342616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:05.553 [2024-12-15 09:53:54.342624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:05.553 [2024-12-15 09:53:54.342633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.383403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.383440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:05.553 [2024-12-15 09:53:54.383452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.383462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.383513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.383522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:05.553 [2024-12-15 09:53:54.383529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.383537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.383593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.383604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:05.553 [2024-12-15 09:53:54.383611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.383624] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.383638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.383647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:05.553 [2024-12-15 09:53:54.383654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.383662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.456107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.456147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:05.553 [2024-12-15 09:53:54.456157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.456168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:05.553 [2024-12-15 09:53:54.484521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:05.553 [2024-12-15 09:53:54.484601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:05.553 [2024-12-15 09:53:54.484686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:05.553 [2024-12-15 09:53:54.484797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:05.553 [2024-12-15 09:53:54.484859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:05.553 [2024-12-15 09:53:54.484918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.484971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.553 [2024-12-15 09:53:54.484982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:05.553 [2024-12-15 09:53:54.484989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.553 [2024-12-15 09:53:54.484998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.553 [2024-12-15 09:53:54.485117] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.593 ms, result 0 00:16:05.553 true 00:16:05.553 09:53:54 -- ftl/bdevperf.sh@37 -- # killprocess 71520 00:16:05.553 09:53:54 -- common/autotest_common.sh@936 -- # '[' -z 71520 ']' 00:16:05.553 09:53:54 -- common/autotest_common.sh@940 -- # kill -0 71520 00:16:05.553 09:53:54 -- common/autotest_common.sh@941 -- # uname 00:16:05.553 09:53:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.553 09:53:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71520 00:16:05.553 killing process with pid 71520 00:16:05.553 Received shutdown signal, test time was about 4.000000 seconds 00:16:05.553 00:16:05.553 Latency(us) 00:16:05.553 [2024-12-15T09:53:54.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.553 [2024-12-15T09:53:54.569Z] =================================================================================================================== 00:16:05.553 [2024-12-15T09:53:54.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.554 09:53:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.554 09:53:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.554 09:53:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71520' 00:16:05.554 09:53:54 -- common/autotest_common.sh@955 -- # kill 71520 00:16:05.554 09:53:54 -- common/autotest_common.sh@960 -- # wait 71520 00:16:06.493 09:53:55 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:06.493 09:53:55 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:06.493 09:53:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.493 09:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.493 Remove shared memory files 00:16:06.493 09:53:55 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:06.494 09:53:55 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:06.494 09:53:55 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:06.494 09:53:55 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:06.494 09:53:55 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:06.494 09:53:55 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:06.494 09:53:55 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:06.494 00:16:06.494 real 0m21.803s 00:16:06.494 user 0m24.064s 00:16:06.494 sys 0m1.001s 00:16:06.494 ************************************ 00:16:06.494 END TEST ftl_bdevperf 00:16:06.494 09:53:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:06.494 09:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.494 ************************************ 00:16:06.494 09:53:55 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:06.494 09:53:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
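The killprocess sequence traced above (autotest_common.sh@936 through @960) follows a standard pattern: verify the PID is alive with kill -0, look up the process name (here it resolves to reactor_0) so that sudo-wrapped targets can be special-cased, then kill and wait so the exit status is reaped. A simplified sketch of that pattern; the real helper carries extra branches (including the sudo escalation path) that are elided here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1        # liveness check: fails if the PID is gone
        if [ "$(uname)" = Linux ]; then
            # used by the real helper to special-case sudo-wrapped targets;
            # in the trace above it resolves to reactor_0
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                       # reap the child and propagate its status
    }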
00:16:06.494 09:53:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.494 09:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.494 ************************************ 00:16:06.494 START TEST ftl_trim 00:16:06.494 ************************************ 00:16:06.494 09:53:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:06.754 * Looking for test storage... 00:16:06.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.754 09:53:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:06.754 09:53:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:06.754 09:53:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:06.754 09:53:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:06.754 09:53:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:06.754 09:53:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:06.754 09:53:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:06.754 09:53:55 -- scripts/common.sh@335 -- # IFS=.-: 00:16:06.754 09:53:55 -- scripts/common.sh@335 -- # read -ra ver1 00:16:06.754 09:53:55 -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.754 09:53:55 -- scripts/common.sh@336 -- # read -ra ver2 00:16:06.754 09:53:55 -- scripts/common.sh@337 -- # local 'op=<' 00:16:06.754 09:53:55 -- scripts/common.sh@339 -- # ver1_l=2 00:16:06.754 09:53:55 -- scripts/common.sh@340 -- # ver2_l=1 00:16:06.754 09:53:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:06.754 09:53:55 -- scripts/common.sh@343 -- # case "$op" in 00:16:06.754 09:53:55 -- scripts/common.sh@344 -- # : 1 00:16:06.754 09:53:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:06.754 09:53:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.754 09:53:55 -- scripts/common.sh@364 -- # decimal 1 00:16:06.754 09:53:55 -- scripts/common.sh@352 -- # local d=1 00:16:06.754 09:53:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.754 09:53:55 -- scripts/common.sh@354 -- # echo 1 00:16:06.754 09:53:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:06.754 09:53:55 -- scripts/common.sh@365 -- # decimal 2 00:16:06.754 09:53:55 -- scripts/common.sh@352 -- # local d=2 00:16:06.754 09:53:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.754 09:53:55 -- scripts/common.sh@354 -- # echo 2 00:16:06.754 09:53:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:06.754 09:53:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:06.754 09:53:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:06.754 09:53:55 -- scripts/common.sh@367 -- # return 0 00:16:06.754 09:53:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.754 09:53:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:06.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.754 --rc genhtml_branch_coverage=1 00:16:06.755 --rc genhtml_function_coverage=1 00:16:06.755 --rc genhtml_legend=1 00:16:06.755 --rc geninfo_all_blocks=1 00:16:06.755 --rc geninfo_unexecuted_blocks=1 00:16:06.755 00:16:06.755 ' 00:16:06.755 09:53:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:06.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.755 --rc genhtml_branch_coverage=1 00:16:06.755 --rc genhtml_function_coverage=1 00:16:06.755 --rc genhtml_legend=1 00:16:06.755 --rc geninfo_all_blocks=1 00:16:06.755 --rc geninfo_unexecuted_blocks=1 00:16:06.755 00:16:06.755 ' 00:16:06.755 09:53:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:06.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.755 --rc genhtml_branch_coverage=1 00:16:06.755 --rc genhtml_function_coverage=1 00:16:06.755 --rc genhtml_legend=1 00:16:06.755 --rc geninfo_all_blocks=1 00:16:06.755 --rc geninfo_unexecuted_blocks=1 00:16:06.755 00:16:06.755 ' 00:16:06.755 09:53:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:06.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.755 --rc genhtml_branch_coverage=1 00:16:06.755 --rc genhtml_function_coverage=1 00:16:06.755 --rc genhtml_legend=1 00:16:06.755 --rc geninfo_all_blocks=1 00:16:06.755 --rc geninfo_unexecuted_blocks=1 00:16:06.755 00:16:06.755 ' 00:16:06.755 09:53:55 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:06.755 09:53:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:06.755 09:53:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.755 09:53:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.755 09:53:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
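The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: lt splits both version strings on ., - and :, then compares them field by field, padding the shorter one. A condensed sketch of that logic; the real helper also validates each field through a decimal() converter, which is elided here:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:'               # split fields exactly as the traced helper does
        local op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]            # equal versions satisfy ==, <=, >=
    }

Matching the trace: lt 1.15 2 returns 0, so the lcov 1.x branch/function coverage flags are selected.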
00:16:06.755 09:53:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:06.755 09:53:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.755 09:53:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:06.755 09:53:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:06.755 09:53:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.755 09:53:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.755 09:53:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:06.755 09:53:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:06.755 09:53:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.755 09:53:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.755 09:53:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:06.755 09:53:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:06.755 09:53:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.755 09:53:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.755 09:53:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:06.755 09:53:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:06.755 09:53:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.755 09:53:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.755 09:53:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.755 09:53:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.755 09:53:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:06.755 09:53:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:06.755 09:53:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.755 09:53:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.755 09:53:55 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.755 09:53:55 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:06.755 09:53:55 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:06.755 09:53:55 -- ftl/trim.sh@25 -- # timeout=240 00:16:06.755 09:53:55 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:06.755 09:53:55 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:06.755 09:53:55 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:06.755 09:53:55 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:06.755 09:53:55 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:06.755 09:53:55 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:06.755 09:53:55 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:06.755 09:53:55 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:06.755 09:53:55 -- ftl/trim.sh@40 -- # svcpid=71880 00:16:06.755 09:53:55 -- ftl/trim.sh@41 -- # waitforlisten 71880 00:16:06.755 09:53:55 -- common/autotest_common.sh@829 -- # '[' -z 71880 ']' 00:16:06.755 09:53:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.755 09:53:55 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:06.755 
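spdk_tgt is launched in the background with core mask 0x7 (three reactors, matching the three "Reactor started" lines further down), and waitforlisten then blocks until PID 71880 answers on /var/tmp/spdk.sock. A minimal sketch of that polling loop, assuming rpc_get_methods as the probe RPC (a cheap call every SPDK app serves); max_retries=100 matches the trace, while the 0.5 s sleep interval is an assumption:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1    # target died before its socket came up
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   rpc_get_methods &> /dev/null; then
                return 0                  # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1                          # never came up within max_retries polls
    }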
09:53:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.755 09:53:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.755 09:53:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.755 09:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.755 [2024-12-15 09:53:55.694443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:06.755 [2024-12-15 09:53:55.694899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71880 ] 00:16:07.015 [2024-12-15 09:53:55.844795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.274 [2024-12-15 09:53:56.071338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.274 [2024-12-15 09:53:56.072001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.274 [2024-12-15 09:53:56.072380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.274 [2024-12-15 09:53:56.072681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.216 09:53:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.216 09:53:57 -- common/autotest_common.sh@862 -- # return 0 00:16:08.216 09:53:57 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:08.216 09:53:57 -- ftl/common.sh@54 -- # local name=nvme0 00:16:08.216 09:53:57 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:08.216 09:53:57 -- ftl/common.sh@56 -- # local size=103424 00:16:08.216 09:53:57 -- ftl/common.sh@59 -- # local base_bdev 00:16:08.216 09:53:57 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:08.477 09:53:57 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:08.477 09:53:57 -- ftl/common.sh@62 -- # local base_size 00:16:08.477 09:53:57 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:08.477 09:53:57 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:08.478 09:53:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:08.478 09:53:57 -- common/autotest_common.sh@1369 -- # local bs 00:16:08.478 09:53:57 -- common/autotest_common.sh@1370 -- # local nb 00:16:08.478 09:53:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:08.740 09:53:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:08.740 { 00:16:08.740 "name": "nvme0n1", 00:16:08.740 "aliases": [ 00:16:08.740 "e22216ee-29df-41f2-8c0c-9abed05279d6" 00:16:08.740 ], 00:16:08.740 "product_name": "NVMe disk", 00:16:08.740 "block_size": 4096, 00:16:08.740 "num_blocks": 1310720, 00:16:08.740 "uuid": "e22216ee-29df-41f2-8c0c-9abed05279d6", 00:16:08.740 "assigned_rate_limits": { 00:16:08.740 "rw_ios_per_sec": 0, 00:16:08.740 "rw_mbytes_per_sec": 0, 00:16:08.740 "r_mbytes_per_sec": 0, 00:16:08.740 "w_mbytes_per_sec": 0 00:16:08.740 }, 00:16:08.740 "claimed": true, 00:16:08.740 "claim_type": "read_many_write_one", 00:16:08.740 "zoned": false, 00:16:08.740 "supported_io_types": { 00:16:08.740 "read": true, 00:16:08.740 "write": true, 00:16:08.740 "unmap": true, 00:16:08.740 
"write_zeroes": true, 00:16:08.740 "flush": true, 00:16:08.740 "reset": true, 00:16:08.740 "compare": true, 00:16:08.740 "compare_and_write": false, 00:16:08.740 "abort": true, 00:16:08.740 "nvme_admin": true, 00:16:08.740 "nvme_io": true 00:16:08.740 }, 00:16:08.740 "driver_specific": { 00:16:08.740 "nvme": [ 00:16:08.740 { 00:16:08.740 "pci_address": "0000:00:07.0", 00:16:08.740 "trid": { 00:16:08.740 "trtype": "PCIe", 00:16:08.740 "traddr": "0000:00:07.0" 00:16:08.740 }, 00:16:08.740 "ctrlr_data": { 00:16:08.740 "cntlid": 0, 00:16:08.740 "vendor_id": "0x1b36", 00:16:08.740 "model_number": "QEMU NVMe Ctrl", 00:16:08.740 "serial_number": "12341", 00:16:08.740 "firmware_revision": "8.0.0", 00:16:08.740 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:08.740 "oacs": { 00:16:08.740 "security": 0, 00:16:08.740 "format": 1, 00:16:08.740 "firmware": 0, 00:16:08.740 "ns_manage": 1 00:16:08.740 }, 00:16:08.740 "multi_ctrlr": false, 00:16:08.740 "ana_reporting": false 00:16:08.740 }, 00:16:08.740 "vs": { 00:16:08.740 "nvme_version": "1.4" 00:16:08.740 }, 00:16:08.740 "ns_data": { 00:16:08.740 "id": 1, 00:16:08.740 "can_share": false 00:16:08.740 } 00:16:08.740 } 00:16:08.740 ], 00:16:08.740 "mp_policy": "active_passive" 00:16:08.740 } 00:16:08.740 } 00:16:08.740 ]' 00:16:08.740 09:53:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:08.740 09:53:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:08.740 09:53:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:08.740 09:53:57 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:08.740 09:53:57 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:08.740 09:53:57 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:08.740 09:53:57 -- ftl/common.sh@63 -- # base_size=5120 00:16:08.740 09:53:57 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:08.740 09:53:57 -- ftl/common.sh@67 -- # clear_lvols 00:16:08.740 09:53:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:08.740 09:53:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:09.001 09:53:57 -- ftl/common.sh@28 -- # stores=0f566465-716d-4571-8c36-35971a44efe0 00:16:09.002 09:53:57 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:09.002 09:53:57 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f566465-716d-4571-8c36-35971a44efe0 00:16:09.263 09:53:58 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:09.263 09:53:58 -- ftl/common.sh@68 -- # lvs=ce76b67e-461b-492a-bff4-4a36ad565359 00:16:09.263 09:53:58 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ce76b67e-461b-492a-bff4-4a36ad565359 00:16:09.525 09:53:58 -- ftl/trim.sh@43 -- # split_bdev=ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:09.525 09:53:58 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:09.525 09:53:58 -- ftl/common.sh@35 -- # local name=nvc0 00:16:09.525 09:53:58 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:09.525 09:53:58 -- ftl/common.sh@37 -- # local base_bdev=ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:09.525 09:53:58 -- ftl/common.sh@38 -- # local cache_size= 00:16:09.525 09:53:58 -- ftl/common.sh@41 -- # get_bdev_size ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:09.525 09:53:58 -- common/autotest_common.sh@1367 -- # local bdev_name=ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:09.525 09:53:58 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:09.525 09:53:58 -- common/autotest_common.sh@1369 -- # local bs 00:16:09.525 09:53:58 -- common/autotest_common.sh@1370 -- # local nb 00:16:09.525 09:53:58 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:09.786 09:53:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:09.786 { 00:16:09.786 "name": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:09.786 "aliases": [ 00:16:09.786 "lvs/nvme0n1p0" 00:16:09.786 ], 00:16:09.786 "product_name": "Logical Volume", 00:16:09.786 "block_size": 4096, 00:16:09.786 "num_blocks": 26476544, 00:16:09.786 "uuid": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:09.786 "assigned_rate_limits": { 00:16:09.786 "rw_ios_per_sec": 0, 00:16:09.786 "rw_mbytes_per_sec": 0, 00:16:09.786 "r_mbytes_per_sec": 0, 00:16:09.786 "w_mbytes_per_sec": 0 00:16:09.786 }, 00:16:09.786 "claimed": false, 00:16:09.786 "zoned": false, 00:16:09.786 "supported_io_types": { 00:16:09.786 "read": true, 00:16:09.786 "write": true, 00:16:09.786 "unmap": true, 00:16:09.786 "write_zeroes": true, 00:16:09.786 "flush": false, 00:16:09.786 "reset": true, 00:16:09.786 "compare": false, 00:16:09.786 "compare_and_write": false, 00:16:09.786 "abort": false, 00:16:09.786 "nvme_admin": false, 00:16:09.786 "nvme_io": false 00:16:09.786 }, 00:16:09.786 "driver_specific": { 00:16:09.786 "lvol": { 00:16:09.786 "lvol_store_uuid": "ce76b67e-461b-492a-bff4-4a36ad565359", 00:16:09.786 "base_bdev": "nvme0n1", 00:16:09.786 "thin_provision": true, 00:16:09.786 "snapshot": false, 00:16:09.786 "clone": false, 00:16:09.786 "esnap_clone": false 00:16:09.786 } 00:16:09.786 } 00:16:09.786 } 00:16:09.786 ]' 00:16:09.786 09:53:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:09.786 09:53:58 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:09.786 09:53:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:09.786 09:53:58 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:09.786 09:53:58 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:09.786 09:53:58 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:09.786 09:53:58 -- ftl/common.sh@41 -- # local base_size=5171 00:16:09.786 09:53:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:09.786 09:53:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:10.048 09:53:58 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:10.048 09:53:58 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:10.048 09:53:58 -- ftl/common.sh@48 -- # get_bdev_size ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:10.048 09:53:58 -- common/autotest_common.sh@1367 -- # local bdev_name=ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:10.048 09:53:58 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:10.048 09:53:58 -- common/autotest_common.sh@1369 -- # local bs 00:16:10.048 09:53:58 -- common/autotest_common.sh@1370 -- # local nb 00:16:10.048 09:53:58 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:10.307 09:53:59 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:10.307 { 00:16:10.308 "name": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:10.308 "aliases": [ 00:16:10.308 "lvs/nvme0n1p0" 00:16:10.308 ], 00:16:10.308 "product_name": "Logical Volume", 00:16:10.308 "block_size": 4096, 00:16:10.308 "num_blocks": 26476544, 
00:16:10.308 "uuid": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:10.308 "assigned_rate_limits": { 00:16:10.308 "rw_ios_per_sec": 0, 00:16:10.308 "rw_mbytes_per_sec": 0, 00:16:10.308 "r_mbytes_per_sec": 0, 00:16:10.308 "w_mbytes_per_sec": 0 00:16:10.308 }, 00:16:10.308 "claimed": false, 00:16:10.308 "zoned": false, 00:16:10.308 "supported_io_types": { 00:16:10.308 "read": true, 00:16:10.308 "write": true, 00:16:10.308 "unmap": true, 00:16:10.308 "write_zeroes": true, 00:16:10.308 "flush": false, 00:16:10.308 "reset": true, 00:16:10.308 "compare": false, 00:16:10.308 "compare_and_write": false, 00:16:10.308 "abort": false, 00:16:10.308 "nvme_admin": false, 00:16:10.308 "nvme_io": false 00:16:10.308 }, 00:16:10.308 "driver_specific": { 00:16:10.308 "lvol": { 00:16:10.308 "lvol_store_uuid": "ce76b67e-461b-492a-bff4-4a36ad565359", 00:16:10.308 "base_bdev": "nvme0n1", 00:16:10.308 "thin_provision": true, 00:16:10.308 "snapshot": false, 00:16:10.308 "clone": false, 00:16:10.308 "esnap_clone": false 00:16:10.308 } 00:16:10.308 } 00:16:10.308 } 00:16:10.308 ]' 00:16:10.308 09:53:59 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:10.308 09:53:59 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:10.308 09:53:59 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:10.308 09:53:59 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:10.308 09:53:59 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:10.308 09:53:59 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:10.308 09:53:59 -- ftl/common.sh@48 -- # cache_size=5171 00:16:10.308 09:53:59 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:10.612 09:53:59 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:10.612 09:53:59 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:10.612 09:53:59 -- ftl/trim.sh@47 -- # get_bdev_size ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:10.612 09:53:59 -- common/autotest_common.sh@1367 -- # local bdev_name=ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:10.612 09:53:59 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:10.612 09:53:59 -- common/autotest_common.sh@1369 -- # local bs 00:16:10.612 09:53:59 -- common/autotest_common.sh@1370 -- # local nb 00:16:10.612 09:53:59 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba6eaec3-1cb3-428e-b46c-2398880ac7fa 00:16:10.612 09:53:59 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:10.612 { 00:16:10.612 "name": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:10.612 "aliases": [ 00:16:10.612 "lvs/nvme0n1p0" 00:16:10.612 ], 00:16:10.612 "product_name": "Logical Volume", 00:16:10.612 "block_size": 4096, 00:16:10.612 "num_blocks": 26476544, 00:16:10.612 "uuid": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:10.612 "assigned_rate_limits": { 00:16:10.612 "rw_ios_per_sec": 0, 00:16:10.612 "rw_mbytes_per_sec": 0, 00:16:10.612 "r_mbytes_per_sec": 0, 00:16:10.612 "w_mbytes_per_sec": 0 00:16:10.612 }, 00:16:10.612 "claimed": false, 00:16:10.612 "zoned": false, 00:16:10.612 "supported_io_types": { 00:16:10.612 "read": true, 00:16:10.612 "write": true, 00:16:10.612 "unmap": true, 00:16:10.612 "write_zeroes": true, 00:16:10.612 "flush": false, 00:16:10.612 "reset": true, 00:16:10.612 "compare": false, 00:16:10.612 "compare_and_write": false, 00:16:10.612 "abort": false, 00:16:10.612 "nvme_admin": false, 00:16:10.612 "nvme_io": false 00:16:10.612 }, 00:16:10.612 "driver_specific": { 00:16:10.612 "lvol": { 00:16:10.612 
"lvol_store_uuid": "ce76b67e-461b-492a-bff4-4a36ad565359", 00:16:10.612 "base_bdev": "nvme0n1", 00:16:10.612 "thin_provision": true, 00:16:10.612 "snapshot": false, 00:16:10.612 "clone": false, 00:16:10.613 "esnap_clone": false 00:16:10.613 } 00:16:10.613 } 00:16:10.613 } 00:16:10.613 ]' 00:16:10.613 09:53:59 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:10.613 09:53:59 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:10.613 09:53:59 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:10.893 09:53:59 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:10.893 09:53:59 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:10.893 09:53:59 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:10.893 09:53:59 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:10.893 09:53:59 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ba6eaec3-1cb3-428e-b46c-2398880ac7fa -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:10.893 [2024-12-15 09:53:59.781550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.781585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:10.893 [2024-12-15 09:53:59.781599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:10.893 [2024-12-15 09:53:59.781605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.783820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.783847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:10.893 [2024-12-15 09:53:59.783856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:16:10.893 [2024-12-15 09:53:59.783862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.783944] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:10.893 [2024-12-15 09:53:59.784512] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:10.893 [2024-12-15 09:53:59.784533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.784540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:10.893 [2024-12-15 09:53:59.784548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:16:10.893 [2024-12-15 09:53:59.784554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.784756] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b04f4c43-114b-4db2-b39a-d0632519f454 00:16:10.893 [2024-12-15 09:53:59.785760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.785786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:10.893 [2024-12-15 09:53:59.785794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:10.893 [2024-12-15 09:53:59.785801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.791019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.791041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:10.893 
[2024-12-15 09:53:59.791048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.139 ms 00:16:10.893 [2024-12-15 09:53:59.791055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.791161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.791171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:10.893 [2024-12-15 09:53:59.791177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:10.893 [2024-12-15 09:53:59.791187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.791228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.791236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:10.893 [2024-12-15 09:53:59.791242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:10.893 [2024-12-15 09:53:59.791249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.791291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:10.893 [2024-12-15 09:53:59.794262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.794283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:10.893 [2024-12-15 09:53:59.794292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:16:10.893 [2024-12-15 09:53:59.794298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.794363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.794370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:10.893 [2024-12-15 09:53:59.794378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:10.893 [2024-12-15 09:53:59.794383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.794412] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:10.893 [2024-12-15 09:53:59.794496] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:10.893 [2024-12-15 09:53:59.794511] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:10.893 [2024-12-15 09:53:59.794519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:10.893 [2024-12-15 09:53:59.794529] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794535] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794545] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:10.893 [2024-12-15 09:53:59.794550] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:10.893 [2024-12-15 09:53:59.794558] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:10.893 [2024-12-15 09:53:59.794563] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:10.893 [2024-12-15 
09:53:59.794570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.794576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:10.893 [2024-12-15 09:53:59.794583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:16:10.893 [2024-12-15 09:53:59.794588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.794651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.893 [2024-12-15 09:53:59.794658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:10.893 [2024-12-15 09:53:59.794667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:10.893 [2024-12-15 09:53:59.794672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.893 [2024-12-15 09:53:59.794756] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:10.893 [2024-12-15 09:53:59.794763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:10.893 [2024-12-15 09:53:59.794770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:10.893 [2024-12-15 09:53:59.794788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:10.893 [2024-12-15 09:53:59.794805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:10.893 [2024-12-15 09:53:59.794816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:10.893 [2024-12-15 09:53:59.794821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:10.893 [2024-12-15 09:53:59.794827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:10.893 [2024-12-15 09:53:59.794832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:10.893 [2024-12-15 09:53:59.794840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:10.893 [2024-12-15 09:53:59.794845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:10.893 [2024-12-15 09:53:59.794882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:10.893 [2024-12-15 09:53:59.794888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:10.893 [2024-12-15 09:53:59.794900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:10.893 [2024-12-15 09:53:59.794905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:10.893 [2024-12-15 09:53:59.794917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:10.893 [2024-12-15 09:53:59.794923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:10.893 [2024-12-15 09:53:59.794934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:10.893 [2024-12-15 09:53:59.794950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:10.893 [2024-12-15 09:53:59.794969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:10.893 [2024-12-15 09:53:59.794980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:10.893 [2024-12-15 09:53:59.794985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:10.893 [2024-12-15 09:53:59.794991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:10.893 [2024-12-15 09:53:59.794996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:10.893 [2024-12-15 09:53:59.795002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:10.893 [2024-12-15 09:53:59.795006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:10.894 [2024-12-15 09:53:59.795013] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:10.894 [2024-12-15 09:53:59.795019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:10.894 [2024-12-15 09:53:59.795025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:10.894 [2024-12-15 09:53:59.795031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:10.894 [2024-12-15 09:53:59.795040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:10.894 [2024-12-15 09:53:59.795045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:10.894 [2024-12-15 09:53:59.795051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:10.894 [2024-12-15 09:53:59.795056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:10.894 [2024-12-15 09:53:59.795064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:10.894 [2024-12-15 09:53:59.795069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:10.894 [2024-12-15 09:53:59.795076] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:10.894 [2024-12-15 09:53:59.795083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:10.894 [2024-12-15 09:53:59.795091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:10.894 [2024-12-15 09:53:59.795096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:10.894 [2024-12-15 09:53:59.795103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:10.894 [2024-12-15 09:53:59.795109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:10.894 [2024-12-15 09:53:59.795115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:10.894 [2024-12-15 09:53:59.795120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:10.894 [2024-12-15 09:53:59.795127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:10.894 [2024-12-15 09:53:59.795133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:10.894 [2024-12-15 09:53:59.795140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:10.894 [2024-12-15 09:53:59.795145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:10.894 [2024-12-15 09:53:59.795151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:10.894 [2024-12-15 09:53:59.795157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:10.894 [2024-12-15 09:53:59.795165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:10.894 [2024-12-15 09:53:59.795171] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:10.894 [2024-12-15 09:53:59.795178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:10.894 [2024-12-15 09:53:59.795185] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:10.894 [2024-12-15 09:53:59.795191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:10.894 [2024-12-15 09:53:59.795197] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:10.894 [2024-12-15 09:53:59.795203] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:10.894 [2024-12-15 09:53:59.795209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.795215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:10.894 [2024-12-15 09:53:59.795220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:16:10.894 [2024-12-15 09:53:59.795227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.807586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:10.894 [2024-12-15 09:53:59.807615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.894 [2024-12-15 09:53:59.807623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.271 ms 00:16:10.894 [2024-12-15 09:53:59.807630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.807727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.807737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:10.894 [2024-12-15 09:53:59.807745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:10.894 [2024-12-15 09:53:59.807752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.833432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.833466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.894 [2024-12-15 09:53:59.833475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.657 ms 00:16:10.894 [2024-12-15 09:53:59.833483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.833539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.833548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.894 [2024-12-15 09:53:59.833555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:10.894 [2024-12-15 09:53:59.833565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.833875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.833894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.894 [2024-12-15 09:53:59.833901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:10.894 [2024-12-15 09:53:59.833908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.834001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.834010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.894 [2024-12-15 09:53:59.834016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:10.894 [2024-12-15 09:53:59.834023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.857472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.857505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.894 [2024-12-15 09:53:59.857516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.423 ms 00:16:10.894 [2024-12-15 09:53:59.857525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.894 [2024-12-15 09:53:59.869003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:10.894 [2024-12-15 09:53:59.881718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.894 [2024-12-15 09:53:59.881741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:10.894 [2024-12-15 09:53:59.881751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.078 ms 00:16:10.894 
[2024-12-15 09:53:59.881757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-12-15 09:53:59.952243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.153 [2024-12-15 09:53:59.952282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:11.153 [2024-12-15 09:53:59.952294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.421 ms 00:16:11.153 [2024-12-15 09:53:59.952300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.153 [2024-12-15 09:53:59.952356] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:11.153 [2024-12-15 09:53:59.952366] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:13.052 [2024-12-15 09:54:02.016428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.052 [2024-12-15 09:54:02.016480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:13.052 [2024-12-15 09:54:02.016497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2064.061 ms 00:16:13.052 [2024-12-15 09:54:02.016506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.052 [2024-12-15 09:54:02.016728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.052 [2024-12-15 09:54:02.016743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:13.052 [2024-12-15 09:54:02.016755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:16:13.052 [2024-12-15 09:54:02.016762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.052 [2024-12-15 09:54:02.039809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.052 [2024-12-15 09:54:02.039835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:13.052 [2024-12-15 09:54:02.039847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.009 ms 00:16:13.052 [2024-12-15 09:54:02.039855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.052 [2024-12-15 09:54:02.062605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.052 [2024-12-15 09:54:02.062631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:13.052 [2024-12-15 09:54:02.062646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.687 ms 00:16:13.052 [2024-12-15 09:54:02.062654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.052 [2024-12-15 09:54:02.062987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.052 [2024-12-15 09:54:02.063002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:13.052 [2024-12-15 09:54:02.063013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:13.052 [2024-12-15 09:54:02.063021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.122599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.310 [2024-12-15 09:54:02.122625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:13.310 [2024-12-15 09:54:02.122638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.541 ms 00:16:13.310 [2024-12-15 09:54:02.122646] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.146511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.310 [2024-12-15 09:54:02.146540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:13.310 [2024-12-15 09:54:02.146553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.794 ms 00:16:13.310 [2024-12-15 09:54:02.146560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.150447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.310 [2024-12-15 09:54:02.150476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:13.310 [2024-12-15 09:54:02.150489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.818 ms 00:16:13.310 [2024-12-15 09:54:02.150497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.173630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.310 [2024-12-15 09:54:02.173665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:13.310 [2024-12-15 09:54:02.173677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.084 ms 00:16:13.310 [2024-12-15 09:54:02.173683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.173744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.310 [2024-12-15 09:54:02.173754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:13.310 [2024-12-15 09:54:02.173764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:13.310 [2024-12-15 09:54:02.173771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.173854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.310 [2024-12-15 09:54:02.173878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:13.310 [2024-12-15 09:54:02.173888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:13.310 [2024-12-15 09:54:02.173895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.310 [2024-12-15 09:54:02.174698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:13.310 [2024-12-15 09:54:02.177756] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2392.895 ms, result 0 00:16:13.310 [2024-12-15 09:54:02.178542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:13.310 { 00:16:13.310 "name": "ftl0", 00:16:13.310 "uuid": "b04f4c43-114b-4db2-b39a-d0632519f454" 00:16:13.310 } 00:16:13.310 09:54:02 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:13.310 09:54:02 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:13.310 09:54:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.310 09:54:02 -- common/autotest_common.sh@899 -- # local i 00:16:13.310 09:54:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.310 09:54:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.310 09:54:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:13.569 09:54:02 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:13.569 [ 00:16:13.569 { 00:16:13.569 "name": "ftl0", 00:16:13.569 "aliases": [ 00:16:13.569 "b04f4c43-114b-4db2-b39a-d0632519f454" 00:16:13.569 ], 00:16:13.569 "product_name": "FTL disk", 00:16:13.569 "block_size": 4096, 00:16:13.569 "num_blocks": 23592960, 00:16:13.569 "uuid": "b04f4c43-114b-4db2-b39a-d0632519f454", 00:16:13.569 "assigned_rate_limits": { 00:16:13.569 "rw_ios_per_sec": 0, 00:16:13.569 "rw_mbytes_per_sec": 0, 00:16:13.569 "r_mbytes_per_sec": 0, 00:16:13.569 "w_mbytes_per_sec": 0 00:16:13.569 }, 00:16:13.569 "claimed": false, 00:16:13.569 "zoned": false, 00:16:13.569 "supported_io_types": { 00:16:13.569 "read": true, 00:16:13.569 "write": true, 00:16:13.569 "unmap": true, 00:16:13.569 "write_zeroes": true, 00:16:13.569 "flush": true, 00:16:13.569 "reset": false, 00:16:13.569 "compare": false, 00:16:13.569 "compare_and_write": false, 00:16:13.569 "abort": false, 00:16:13.569 "nvme_admin": false, 00:16:13.569 "nvme_io": false 00:16:13.569 }, 00:16:13.569 "driver_specific": { 00:16:13.569 "ftl": { 00:16:13.569 "base_bdev": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:13.569 "cache": "nvc0n1p0" 00:16:13.569 } 00:16:13.569 } 00:16:13.569 } 00:16:13.569 ] 00:16:13.569 09:54:02 -- common/autotest_common.sh@905 -- # return 0 00:16:13.569 09:54:02 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:13.569 09:54:02 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:13.827 09:54:02 -- ftl/trim.sh@56 -- # echo ']}' 00:16:13.827 09:54:02 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:14.086 09:54:02 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:14.086 { 00:16:14.086 "name": "ftl0", 00:16:14.086 "aliases": [ 00:16:14.086 "b04f4c43-114b-4db2-b39a-d0632519f454" 00:16:14.086 ], 00:16:14.086 "product_name": "FTL disk", 00:16:14.086 "block_size": 4096, 00:16:14.086 "num_blocks": 23592960, 00:16:14.086 "uuid": "b04f4c43-114b-4db2-b39a-d0632519f454", 00:16:14.086 "assigned_rate_limits": { 00:16:14.086 "rw_ios_per_sec": 0, 00:16:14.086 "rw_mbytes_per_sec": 0, 00:16:14.086 "r_mbytes_per_sec": 0, 00:16:14.086 "w_mbytes_per_sec": 0 00:16:14.086 }, 00:16:14.086 "claimed": false, 00:16:14.086 "zoned": false, 00:16:14.086 "supported_io_types": { 00:16:14.086 "read": true, 00:16:14.086 "write": true, 00:16:14.086 "unmap": true, 00:16:14.086 "write_zeroes": true, 00:16:14.086 "flush": true, 00:16:14.086 "reset": false, 00:16:14.086 "compare": false, 00:16:14.086 "compare_and_write": false, 00:16:14.086 "abort": false, 00:16:14.086 "nvme_admin": false, 00:16:14.086 "nvme_io": false 00:16:14.086 }, 00:16:14.086 "driver_specific": { 00:16:14.086 "ftl": { 00:16:14.086 "base_bdev": "ba6eaec3-1cb3-428e-b46c-2398880ac7fa", 00:16:14.086 "cache": "nvc0n1p0" 00:16:14.086 } 00:16:14.086 } 00:16:14.086 } 00:16:14.086 ]' 00:16:14.086 09:54:02 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:14.086 09:54:02 -- ftl/trim.sh@60 -- # nb=23592960 00:16:14.086 09:54:02 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:14.346 [2024-12-15 09:54:03.138413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.138447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:14.346 [2024-12-15 09:54:03.138457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.346 [2024-12-15 09:54:03.138464] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.138498] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:14.346 [2024-12-15 09:54:03.140536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.140557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.346 [2024-12-15 09:54:03.140569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:16:14.346 [2024-12-15 09:54:03.140576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.141104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.141119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.346 [2024-12-15 09:54:03.141130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:16:14.346 [2024-12-15 09:54:03.141136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.143921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.143935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:14.346 [2024-12-15 09:54:03.143946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:16:14.346 [2024-12-15 09:54:03.143953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.149205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.149224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:14.346 [2024-12-15 09:54:03.149232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.212 ms 00:16:14.346 [2024-12-15 09:54:03.149238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.167938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.167958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.346 [2024-12-15 09:54:03.167968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.608 ms 00:16:14.346 [2024-12-15 09:54:03.167974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.180440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.180462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.346 [2024-12-15 09:54:03.180472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.406 ms 00:16:14.346 [2024-12-15 09:54:03.180479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.180684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.180697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.346 [2024-12-15 09:54:03.180710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:14.346 [2024-12-15 09:54:03.180716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.198768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.198789] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:14.346 [2024-12-15 09:54:03.198798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.021 ms 00:16:14.346 [2024-12-15 09:54:03.198804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.216247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.216273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:14.346 [2024-12-15 09:54:03.216282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.391 ms 00:16:14.346 [2024-12-15 09:54:03.216288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.233555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.233577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.346 [2024-12-15 09:54:03.233586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.213 ms 00:16:14.346 [2024-12-15 09:54:03.233591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.251024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.346 [2024-12-15 09:54:03.251045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.346 [2024-12-15 09:54:03.251056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.329 ms 00:16:14.346 [2024-12-15 09:54:03.251061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.346 [2024-12-15 09:54:03.251110] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.346 [2024-12-15 09:54:03.251122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251387] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.346 [2024-12-15 09:54:03.251467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 
09:54:03.251549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:14.347 [2024-12-15 09:54:03.251714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.347 [2024-12-15 09:54:03.251808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.347 [2024-12-15 09:54:03.251815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:16:14.347 [2024-12-15 09:54:03.251821] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.347 [2024-12-15 09:54:03.251828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.347 [2024-12-15 09:54:03.251833] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.347 [2024-12-15 09:54:03.251840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.347 [2024-12-15 09:54:03.251845] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.347 [2024-12-15 09:54:03.251852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.347 [2024-12-15 09:54:03.251858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.347 [2024-12-15 09:54:03.251865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.347 [2024-12-15 09:54:03.251870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.347 [2024-12-15 09:54:03.251877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.347 [2024-12-15 09:54:03.251884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.347 [2024-12-15 09:54:03.251892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:16:14.347 [2024-12-15 09:54:03.251897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:14.347 [2024-12-15 09:54:03.261523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.347 [2024-12-15 09:54:03.261543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.347 [2024-12-15 09:54:03.261552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.594 ms 00:16:14.347 [2024-12-15 09:54:03.261558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.347 [2024-12-15 09:54:03.261741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.347 [2024-12-15 09:54:03.261752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.347 [2024-12-15 09:54:03.261759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:14.347 [2024-12-15 09:54:03.261765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.347 [2024-12-15 09:54:03.296842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.347 [2024-12-15 09:54:03.296866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.347 [2024-12-15 09:54:03.296877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.347 [2024-12-15 09:54:03.296884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.347 [2024-12-15 09:54:03.296969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.347 [2024-12-15 09:54:03.296977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.347 [2024-12-15 09:54:03.296984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.347 [2024-12-15 09:54:03.296990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.347 [2024-12-15 09:54:03.297041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.347 [2024-12-15 09:54:03.297048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.347 [2024-12-15 09:54:03.297056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.347 [2024-12-15 09:54:03.297061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.347 [2024-12-15 09:54:03.297092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.347 [2024-12-15 09:54:03.297100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.347 [2024-12-15 09:54:03.297107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.347 [2024-12-15 09:54:03.297113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.362587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.362616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.606 [2024-12-15 09:54:03.362628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.362634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.606 [2024-12-15 09:54:03.385353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 
[2024-12-15 09:54:03.385359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.606 [2024-12-15 09:54:03.385431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.385437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.606 [2024-12-15 09:54:03.385507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.385524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.606 [2024-12-15 09:54:03.385630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.385636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:14.606 [2024-12-15 09:54:03.385704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.385710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.606 [2024-12-15 09:54:03.385768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.385773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.385827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.606 [2024-12-15 09:54:03.385834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.606 [2024-12-15 09:54:03.385843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.606 [2024-12-15 09:54:03.385849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.606 [2024-12-15 09:54:03.386011] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 247.586 ms, result 0 00:16:14.606 true 00:16:14.606 09:54:03 -- ftl/trim.sh@63 -- # killprocess 71880 00:16:14.606 09:54:03 -- common/autotest_common.sh@936 -- # '[' -z 71880 ']' 00:16:14.606 09:54:03 -- common/autotest_common.sh@940 -- # kill -0 71880 00:16:14.606 09:54:03 -- common/autotest_common.sh@941 -- # uname 00:16:14.606 09:54:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.606 09:54:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71880 00:16:14.606 killing process with pid 71880 00:16:14.606 09:54:03 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:14.606 09:54:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:14.606 09:54:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71880' 00:16:14.606 09:54:03 -- common/autotest_common.sh@955 -- # kill 71880 00:16:14.606 09:54:03 -- common/autotest_common.sh@960 -- # wait 71880 00:16:21.187 09:54:09 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:21.448 65536+0 records in 00:16:21.448 65536+0 records out 00:16:21.448 268435456 bytes (268 MB, 256 MiB) copied, 1.11042 s, 242 MB/s 00:16:21.448 09:54:10 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:21.709 [2024-12-15 09:54:10.480040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:21.709 [2024-12-15 09:54:10.480173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72097 ] 00:16:21.709 [2024-12-15 09:54:10.635951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.969 [2024-12-15 09:54:10.847480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.230 [2024-12-15 09:54:11.114009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:22.230 [2024-12-15 09:54:11.114070] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:22.493 [2024-12-15 09:54:11.265327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.265370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:22.493 [2024-12-15 09:54:11.265383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:22.493 [2024-12-15 09:54:11.265391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.268086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.268126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.493 [2024-12-15 09:54:11.268136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:16:22.493 [2024-12-15 09:54:11.268144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.268216] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:22.493 [2024-12-15 09:54:11.269294] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:22.493 [2024-12-15 09:54:11.269336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.269346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.493 [2024-12-15 09:54:11.269356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:16:22.493 [2024-12-15 09:54:11.269364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.270536] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:22.493 [2024-12-15 09:54:11.283106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.283139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:22.493 [2024-12-15 09:54:11.283150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.572 ms 00:16:22.493 [2024-12-15 09:54:11.283159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.283240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.283250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:22.493 [2024-12-15 09:54:11.283278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:22.493 [2024-12-15 09:54:11.283286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.288404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.288433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.493 [2024-12-15 09:54:11.288442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:16:22.493 [2024-12-15 09:54:11.288453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.288553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.288564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.493 [2024-12-15 09:54:11.288572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:22.493 [2024-12-15 09:54:11.288580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.288604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.288612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:22.493 [2024-12-15 09:54:11.288619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:22.493 [2024-12-15 09:54:11.288626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.288676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:22.493 [2024-12-15 09:54:11.292132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.292160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.493 [2024-12-15 09:54:11.292170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:16:22.493 [2024-12-15 09:54:11.292180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.292217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.292226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:22.493 [2024-12-15 09:54:11.292234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:22.493 [2024-12-15 09:54:11.292241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.292283] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:22.493 [2024-12-15 09:54:11.292303] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:22.493 [2024-12-15 09:54:11.292334] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:22.493 [2024-12-15 09:54:11.292352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:22.493 [2024-12-15 09:54:11.292427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:22.493 [2024-12-15 09:54:11.292438] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:22.493 [2024-12-15 09:54:11.292449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:22.493 [2024-12-15 09:54:11.292459] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:22.493 [2024-12-15 09:54:11.292468] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:22.493 [2024-12-15 09:54:11.292476] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:22.493 [2024-12-15 09:54:11.292484] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:22.493 [2024-12-15 09:54:11.292492] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:22.493 [2024-12-15 09:54:11.292501] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:22.493 [2024-12-15 09:54:11.292508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.292515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:22.493 [2024-12-15 09:54:11.292523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:22.493 [2024-12-15 09:54:11.292530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.292606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.493 [2024-12-15 09:54:11.292616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:22.493 [2024-12-15 09:54:11.292624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:22.493 [2024-12-15 09:54:11.292642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.493 [2024-12-15 09:54:11.292717] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:22.493 [2024-12-15 09:54:11.292728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:22.493 [2024-12-15 09:54:11.292736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:22.493 [2024-12-15 09:54:11.292743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.493 [2024-12-15 09:54:11.292751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:22.493 [2024-12-15 09:54:11.292758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:22.493 [2024-12-15 09:54:11.292766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:22.493 [2024-12-15 09:54:11.292773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:22.493 [2024-12-15 09:54:11.292780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:22.493 [2024-12-15 09:54:11.292786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:22.493 [2024-12-15 09:54:11.292793] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:22.493 [2024-12-15 09:54:11.292799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:22.493 [2024-12-15 09:54:11.292805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:22.493 [2024-12-15 09:54:11.292811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:22.493 [2024-12-15 09:54:11.292827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:22.493 [2024-12-15 09:54:11.292833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.493 [2024-12-15 09:54:11.292840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:22.493 [2024-12-15 09:54:11.292850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:22.493 [2024-12-15 09:54:11.292856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.493 [2024-12-15 09:54:11.292862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:22.493 [2024-12-15 09:54:11.292869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:22.493 [2024-12-15 09:54:11.292876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:22.493 [2024-12-15 09:54:11.292883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:22.494 [2024-12-15 09:54:11.292889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:22.494 [2024-12-15 09:54:11.292895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.494 [2024-12-15 09:54:11.292901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:22.494 [2024-12-15 09:54:11.292908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:22.494 [2024-12-15 09:54:11.292914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.494 [2024-12-15 09:54:11.292920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:22.494 [2024-12-15 09:54:11.292927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:22.494 [2024-12-15 09:54:11.292933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.494 [2024-12-15 09:54:11.292939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:22.494 [2024-12-15 09:54:11.292945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:22.494 [2024-12-15 09:54:11.292952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:22.494 [2024-12-15 09:54:11.292958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:22.494 [2024-12-15 09:54:11.292965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:22.494 [2024-12-15 09:54:11.292971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:22.494 [2024-12-15 09:54:11.292978] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:22.494 [2024-12-15 09:54:11.292984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:22.494 [2024-12-15 09:54:11.292991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:22.494 [2024-12-15 09:54:11.292996] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:22.494 [2024-12-15 09:54:11.293004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:16:22.494 [2024-12-15 09:54:11.293011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:22.494 [2024-12-15 09:54:11.293020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:22.494 [2024-12-15 09:54:11.293027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:22.494 [2024-12-15 09:54:11.293034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:22.494 [2024-12-15 09:54:11.293040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:22.494 [2024-12-15 09:54:11.293047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:22.494 [2024-12-15 09:54:11.293053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:22.494 [2024-12-15 09:54:11.293060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:22.494 [2024-12-15 09:54:11.293068] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:22.494 [2024-12-15 09:54:11.293077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:22.494 [2024-12-15 09:54:11.293086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:22.494 [2024-12-15 09:54:11.293093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:22.494 [2024-12-15 09:54:11.293100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:22.494 [2024-12-15 09:54:11.293107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:22.494 [2024-12-15 09:54:11.293114] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:22.494 [2024-12-15 09:54:11.293120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:22.494 [2024-12-15 09:54:11.293127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:22.494 [2024-12-15 09:54:11.293135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:22.494 [2024-12-15 09:54:11.293142] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:22.494 [2024-12-15 09:54:11.293149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:22.494 [2024-12-15 09:54:11.293155] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:22.494 [2024-12-15 09:54:11.293162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:22.494 [2024-12-15 09:54:11.293170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:22.494 [2024-12-15 09:54:11.293176] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:22.494 [2024-12-15 09:54:11.293188] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:22.494 [2024-12-15 09:54:11.293195] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:22.494 [2024-12-15 09:54:11.293202] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:22.494 [2024-12-15 09:54:11.293209] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:22.494 [2024-12-15 09:54:11.293217] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:22.494 [2024-12-15 09:54:11.293223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.293230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:22.494 [2024-12-15 09:54:11.293238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:22.494 [2024-12-15 09:54:11.293245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.308212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.308245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.494 [2024-12-15 09:54:11.308281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.909 ms 00:16:22.494 [2024-12-15 09:54:11.308290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.308404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.308415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:22.494 [2024-12-15 09:54:11.308423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:22.494 [2024-12-15 09:54:11.308431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.347546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.347582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.494 [2024-12-15 09:54:11.347594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.095 ms 00:16:22.494 [2024-12-15 09:54:11.347602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.347669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.347679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.494 [2024-12-15 09:54:11.347691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:22.494 [2024-12-15 09:54:11.347698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.348016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.348032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.494 [2024-12-15 09:54:11.348041] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:16:22.494 [2024-12-15 09:54:11.348049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.348166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.348176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.494 [2024-12-15 09:54:11.348185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:16:22.494 [2024-12-15 09:54:11.348192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.362386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.362416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.494 [2024-12-15 09:54:11.362426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.172 ms 00:16:22.494 [2024-12-15 09:54:11.362436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.375490] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:22.494 [2024-12-15 09:54:11.375529] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:22.494 [2024-12-15 09:54:11.375540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.375548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:22.494 [2024-12-15 09:54:11.375557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.010 ms 00:16:22.494 [2024-12-15 09:54:11.375564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.400031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.400065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:22.494 [2024-12-15 09:54:11.400080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.398 ms 00:16:22.494 [2024-12-15 09:54:11.400088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.412186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.412331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:22.494 [2024-12-15 09:54:11.412347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.031 ms 00:16:22.494 [2024-12-15 09:54:11.412362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.424698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.494 [2024-12-15 09:54:11.424735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:22.494 [2024-12-15 09:54:11.424748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.012 ms 00:16:22.494 [2024-12-15 09:54:11.424756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.494 [2024-12-15 09:54:11.425120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.495 [2024-12-15 09:54:11.425133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:22.495 [2024-12-15 09:54:11.425142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:16:22.495 
[2024-12-15 09:54:11.425149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.495 [2024-12-15 09:54:11.483440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.495 [2024-12-15 09:54:11.483596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:22.495 [2024-12-15 09:54:11.483615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.269 ms 00:16:22.495 [2024-12-15 09:54:11.483623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.495 [2024-12-15 09:54:11.494665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:22.756 [2024-12-15 09:54:11.509150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.756 [2024-12-15 09:54:11.509186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:22.756 [2024-12-15 09:54:11.509199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.979 ms 00:16:22.756 [2024-12-15 09:54:11.509207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.756 [2024-12-15 09:54:11.509299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.756 [2024-12-15 09:54:11.509310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:22.756 [2024-12-15 09:54:11.509319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:22.757 [2024-12-15 09:54:11.509329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 09:54:11.509378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.757 [2024-12-15 09:54:11.509400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:22.757 [2024-12-15 09:54:11.509408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:22.757 [2024-12-15 09:54:11.509415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 09:54:11.510576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.757 [2024-12-15 09:54:11.510607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:22.757 [2024-12-15 09:54:11.510616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:16:22.757 [2024-12-15 09:54:11.510623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 09:54:11.510653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.757 [2024-12-15 09:54:11.510663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:22.757 [2024-12-15 09:54:11.510673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:22.757 [2024-12-15 09:54:11.510681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 09:54:11.510711] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:22.757 [2024-12-15 09:54:11.510721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.757 [2024-12-15 09:54:11.510728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:22.757 [2024-12-15 09:54:11.510736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:22.757 [2024-12-15 09:54:11.510743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 
09:54:11.535096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.757 [2024-12-15 09:54:11.535145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:22.757 [2024-12-15 09:54:11.535156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.330 ms 00:16:22.757 [2024-12-15 09:54:11.535164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 09:54:11.535249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.757 [2024-12-15 09:54:11.535278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:22.757 [2024-12-15 09:54:11.535287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:22.757 [2024-12-15 09:54:11.535294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.757 [2024-12-15 09:54:11.536082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:22.757 [2024-12-15 09:54:11.539287] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.481 ms, result 0 00:16:22.757 [2024-12-15 09:54:11.540163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:22.757 [2024-12-15 09:54:11.553532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:23.695  [2024-12-15T09:54:13.651Z] Copying: 20/256 [MB] (20 MBps) [2024-12-15T09:54:14.597Z] Copying: 44/256 [MB] (23 MBps) [2024-12-15T09:54:15.980Z] Copying: 62/256 [MB] (18 MBps) [2024-12-15T09:54:16.923Z] Copying: 86/256 [MB] (23 MBps) [2024-12-15T09:54:17.866Z] Copying: 107/256 [MB] (21 MBps) [2024-12-15T09:54:18.808Z] Copying: 130/256 [MB] (22 MBps) [2024-12-15T09:54:19.753Z] Copying: 150/256 [MB] (20 MBps) [2024-12-15T09:54:20.697Z] Copying: 166/256 [MB] (16 MBps) [2024-12-15T09:54:21.630Z] Copying: 184/256 [MB] (18 MBps) [2024-12-15T09:54:22.197Z] Copying: 230/256 [MB] (45 MBps) [2024-12-15T09:54:22.197Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-15 09:54:22.060389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:33.181 [2024-12-15 09:54:22.067774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.067888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:33.181 [2024-12-15 09:54:22.067911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:33.181 [2024-12-15 09:54:22.067917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.181 [2024-12-15 09:54:22.067938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:33.181 [2024-12-15 09:54:22.070095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.070116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:33.181 [2024-12-15 09:54:22.070125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:16:33.181 [2024-12-15 09:54:22.070131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.181 [2024-12-15 09:54:22.072178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.072201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:16:33.181 [2024-12-15 09:54:22.072208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:16:33.181 [2024-12-15 09:54:22.072214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.181 [2024-12-15 09:54:22.078000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.078022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:33.181 [2024-12-15 09:54:22.078029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.767 ms 00:16:33.181 [2024-12-15 09:54:22.078035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.181 [2024-12-15 09:54:22.083439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.083459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:33.181 [2024-12-15 09:54:22.083466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.343 ms 00:16:33.181 [2024-12-15 09:54:22.083472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.181 [2024-12-15 09:54:22.101543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.101564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:33.181 [2024-12-15 09:54:22.101572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.021 ms 00:16:33.181 [2024-12-15 09:54:22.101578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.181 [2024-12-15 09:54:22.113280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.181 [2024-12-15 09:54:22.113390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:33.181 [2024-12-15 09:54:22.113404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.665 ms 00:16:33.182 [2024-12-15 09:54:22.113409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.182 [2024-12-15 09:54:22.113510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.182 [2024-12-15 09:54:22.113518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:33.182 [2024-12-15 09:54:22.113525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:33.182 [2024-12-15 09:54:22.113530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.182 [2024-12-15 09:54:22.131712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.182 [2024-12-15 09:54:22.131733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:33.182 [2024-12-15 09:54:22.131742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.168 ms 00:16:33.182 [2024-12-15 09:54:22.131747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.182 [2024-12-15 09:54:22.149657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.182 [2024-12-15 09:54:22.149678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:33.182 [2024-12-15 09:54:22.149686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.863 ms 00:16:33.182 [2024-12-15 09:54:22.149691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.182 [2024-12-15 09:54:22.167055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.182 [2024-12-15 
09:54:22.167073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:33.182 [2024-12-15 09:54:22.167081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:16:33.182 [2024-12-15 09:54:22.167086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.182 [2024-12-15 09:54:22.184302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.182 [2024-12-15 09:54:22.184321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:33.182 [2024-12-15 09:54:22.184328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.160 ms 00:16:33.182 [2024-12-15 09:54:22.184334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.182 [2024-12-15 09:54:22.184368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:33.182 [2024-12-15 09:54:22.184378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184630] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184775] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:33.182 [2024-12-15 09:54:22.184802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 
09:54:22.184914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:33.183 [2024-12-15 09:54:22.184964] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:33.183 [2024-12-15 09:54:22.184970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:16:33.183 [2024-12-15 09:54:22.184976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:33.183 [2024-12-15 09:54:22.184981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:33.183 [2024-12-15 09:54:22.184986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:33.183 [2024-12-15 09:54:22.184992] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:33.183 [2024-12-15 09:54:22.184998] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:33.183 [2024-12-15 09:54:22.185003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:33.183 [2024-12-15 09:54:22.185011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:33.183 [2024-12-15 09:54:22.185015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:33.183 [2024-12-15 09:54:22.185020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:33.183 [2024-12-15 09:54:22.185027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.183 [2024-12-15 09:54:22.185033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:33.183 [2024-12-15 09:54:22.185039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:16:33.183 [2024-12-15 09:54:22.185044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.183 [2024-12-15 09:54:22.194430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.444 [2024-12-15 09:54:22.194533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:33.444 [2024-12-15 09:54:22.194545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.372 ms 00:16:33.444 [2024-12-15 09:54:22.194555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.194719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.444 [2024-12-15 09:54:22.194726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:33.444 [2024-12-15 09:54:22.194733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:33.444 [2024-12-15 09:54:22.194738] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.224191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.224212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.444 [2024-12-15 09:54:22.224220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.224229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.224313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.224320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.444 [2024-12-15 09:54:22.224327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.224332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.224364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.224371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.444 [2024-12-15 09:54:22.224377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.224382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.224399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.224404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.444 [2024-12-15 09:54:22.224410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.224416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.281093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.281119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.444 [2024-12-15 09:54:22.281127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.281136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.444 [2024-12-15 09:54:22.303519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.444 [2024-12-15 09:54:22.303574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.444 [2024-12-15 09:54:22.303620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.444 [2024-12-15 09:54:22.303709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:33.444 [2024-12-15 09:54:22.303753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.444 [2024-12-15 09:54:22.303801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.444 [2024-12-15 09:54:22.303841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.444 [2024-12-15 09:54:22.303850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.444 [2024-12-15 09:54:22.303858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.444 [2024-12-15 09:54:22.303863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.445 [2024-12-15 09:54:22.303970] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 236.190 ms, result 0 00:16:34.383 00:16:34.383 00:16:34.383 09:54:23 -- ftl/trim.sh@72 -- # svcpid=72241 00:16:34.383 09:54:23 -- ftl/trim.sh@73 -- # waitforlisten 72241 00:16:34.383 09:54:23 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:34.383 09:54:23 -- common/autotest_common.sh@829 -- # '[' -z 72241 ']' 00:16:34.383 09:54:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.383 09:54:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.383 09:54:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.383 09:54:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.383 09:54:23 -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 [2024-12-15 09:54:23.404761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
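The 'FTL shutdown' result above closes the first spdk_dd run; ftl/trim.sh (per the @71-@75 markers) then starts a long-lived spdk_tgt with -L ftl_init, records its pid as svcpid=72241, and blocks in waitforlisten until the target answers on the UNIX domain socket /var/tmp/spdk.sock before configuring it with rpc.py load_config. A minimal sketch of that launch-and-wait pattern, assuming a simple rpc_get_methods probe and an illustrative config path (the real helper in autotest_common.sh is more elaborate, but the rpc_addr and max_retries=100 values appear in the log above):

  # Launch the SPDK target with FTL init tracing enabled, as trim.sh@71 does.
  ./build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  rpc_addr=/var/tmp/spdk.sock
  # Poll the RPC UNIX socket until the target answers (up to 100 tries, ~10 s).
  for ((i = 0; i < 100; i++)); do
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done
  # Once listening, replay the saved bdev/FTL configuration over RPC
  # (the JSON path is illustrative; the same file was passed to spdk_dd
  # earlier via --json=.../test/ftl/config/ftl.json).
  scripts/rpc.py -s "$rpc_addr" load_config < test/ftl/config/ftl.json

After load_config replays the JSON, the second FTL startup below walks the same management pipeline as the first run (Check configuration, Open base bdev, Open cache bdev, Load super block, and so on).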
00:16:34.642 [2024-12-15 09:54:23.404889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72241 ] 00:16:34.642 [2024-12-15 09:54:23.554989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.901 [2024-12-15 09:54:23.705952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.901 [2024-12-15 09:54:23.706284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.486 09:54:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.486 09:54:24 -- common/autotest_common.sh@862 -- # return 0 00:16:35.486 09:54:24 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:35.486 [2024-12-15 09:54:24.413301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:35.486 [2024-12-15 09:54:24.413345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:35.803 [2024-12-15 09:54:24.570356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.570514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:35.803 [2024-12-15 09:54:24.570532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.803 [2024-12-15 09:54:24.570539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.572590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.572628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.803 [2024-12-15 09:54:24.572638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:16:35.803 [2024-12-15 09:54:24.572644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.572704] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:35.803 [2024-12-15 09:54:24.573479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:35.803 [2024-12-15 09:54:24.573512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.573520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.803 [2024-12-15 09:54:24.573529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:16:35.803 [2024-12-15 09:54:24.573534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.574564] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:35.803 [2024-12-15 09:54:24.584327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.584459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:35.803 [2024-12-15 09:54:24.584473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.767 ms 00:16:35.803 [2024-12-15 09:54:24.584480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.584541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.584551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:35.803 [2024-12-15 09:54:24.584558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:35.803 [2024-12-15 09:54:24.584564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.588964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.588993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.803 [2024-12-15 09:54:24.589000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:16:35.803 [2024-12-15 09:54:24.589008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.589071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.589080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.803 [2024-12-15 09:54:24.589086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:35.803 [2024-12-15 09:54:24.589093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.589114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.589122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:35.803 [2024-12-15 09:54:24.589128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:35.803 [2024-12-15 09:54:24.589136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.589158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:35.803 [2024-12-15 09:54:24.591954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.592061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.803 [2024-12-15 09:54:24.592075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:16:35.803 [2024-12-15 09:54:24.592081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.592118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.592124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:35.803 [2024-12-15 09:54:24.592131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:35.803 [2024-12-15 09:54:24.592139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.592156] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:35.803 [2024-12-15 09:54:24.592169] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:35.803 [2024-12-15 09:54:24.592196] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:35.803 [2024-12-15 09:54:24.592207] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:35.803 [2024-12-15 09:54:24.592279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:35.803 [2024-12-15 09:54:24.592287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:35.803 [2024-12-15 09:54:24.592300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:35.803 [2024-12-15 09:54:24.592308] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:35.803 [2024-12-15 09:54:24.592315] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:35.803 [2024-12-15 09:54:24.592321] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:35.803 [2024-12-15 09:54:24.592328] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:35.803 [2024-12-15 09:54:24.592333] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:35.803 [2024-12-15 09:54:24.592341] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:35.803 [2024-12-15 09:54:24.592347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.592353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:35.803 [2024-12-15 09:54:24.592359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:35.803 [2024-12-15 09:54:24.592365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.592416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.803 [2024-12-15 09:54:24.592424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:35.803 [2024-12-15 09:54:24.592430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:35.803 [2024-12-15 09:54:24.592436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.803 [2024-12-15 09:54:24.592493] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:35.803 [2024-12-15 09:54:24.592502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:35.803 [2024-12-15 09:54:24.592509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:35.803 [2024-12-15 09:54:24.592516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.803 [2024-12-15 09:54:24.592522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:35.803 [2024-12-15 09:54:24.592528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:35.803 [2024-12-15 09:54:24.592535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:35.803 [2024-12-15 09:54:24.592543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:35.803 [2024-12-15 09:54:24.592549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:35.803 [2024-12-15 09:54:24.592556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:35.803 [2024-12-15 09:54:24.592563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:35.803 [2024-12-15 09:54:24.592569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:35.803 [2024-12-15 09:54:24.592575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:35.803 [2024-12-15 09:54:24.592582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:35.803 [2024-12-15 09:54:24.592587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:35.803 [2024-12-15 09:54:24.592593] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:35.804 [2024-12-15 09:54:24.592603] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:35.804 [2024-12-15 09:54:24.592624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:35.804 [2024-12-15 09:54:24.592636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:35.804 [2024-12-15 09:54:24.592642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:35.804 [2024-12-15 09:54:24.592655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:35.804 [2024-12-15 09:54:24.592676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:35.804 [2024-12-15 09:54:24.592693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:35.804 [2024-12-15 09:54:24.592711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:35.804 [2024-12-15 09:54:24.592728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:35.804 [2024-12-15 09:54:24.592739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:35.804 [2024-12-15 09:54:24.592744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:35.804 [2024-12-15 09:54:24.592752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:35.804 [2024-12-15 09:54:24.592756] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:35.804 [2024-12-15 09:54:24.592764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:35.804 [2024-12-15 09:54:24.592771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.804 [2024-12-15 09:54:24.592785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:35.804 [2024-12-15 09:54:24.592791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:35.804 [2024-12-15 09:54:24.592796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:35.804 [2024-12-15 09:54:24.592803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:35.804 [2024-12-15 09:54:24.592808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:35.804 [2024-12-15 09:54:24.592814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:35.804 [2024-12-15 09:54:24.592820] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:35.804 [2024-12-15 09:54:24.592829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:35.804 [2024-12-15 09:54:24.592835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:35.804 [2024-12-15 09:54:24.592841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:35.804 [2024-12-15 09:54:24.592847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:35.804 [2024-12-15 09:54:24.592855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:35.804 [2024-12-15 09:54:24.592861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:35.804 [2024-12-15 09:54:24.592868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:35.804 [2024-12-15 09:54:24.592874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:35.804 [2024-12-15 09:54:24.592880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:35.804 [2024-12-15 09:54:24.592885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:35.804 [2024-12-15 09:54:24.592892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:35.804 [2024-12-15 09:54:24.592897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:35.804 [2024-12-15 09:54:24.592904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:35.804 [2024-12-15 09:54:24.592910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:35.804 [2024-12-15 09:54:24.592916] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:35.804 [2024-12-15 09:54:24.592922] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:35.804 [2024-12-15 09:54:24.592929] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:35.804 [2024-12-15 09:54:24.592934] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:35.804 [2024-12-15 09:54:24.592941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:35.804 [2024-12-15 09:54:24.592947] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:35.804 [2024-12-15 09:54:24.592956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.592961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:35.804 [2024-12-15 09:54:24.592968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:16:35.804 [2024-12-15 09:54:24.592974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.604873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.604899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.804 [2024-12-15 09:54:24.604910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.861 ms 00:16:35.804 [2024-12-15 09:54:24.604918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.605006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.605014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:35.804 [2024-12-15 09:54:24.605021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:35.804 [2024-12-15 09:54:24.605026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.629187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.629214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.804 [2024-12-15 09:54:24.629224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.143 ms 00:16:35.804 [2024-12-15 09:54:24.629231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.629291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.629301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.804 [2024-12-15 09:54:24.629309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:35.804 [2024-12-15 09:54:24.629316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.629599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.629617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.804 [2024-12-15 09:54:24.629626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:16:35.804 [2024-12-15 09:54:24.629633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.629725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.629738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.804 [2024-12-15 09:54:24.629747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:35.804 [2024-12-15 09:54:24.629753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.641686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.641709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.804 [2024-12-15 09:54:24.641720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.916 ms 00:16:35.804 [2024-12-15 09:54:24.641726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.652061] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:35.804 [2024-12-15 09:54:24.652088] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:35.804 [2024-12-15 09:54:24.652099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.652106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:35.804 [2024-12-15 09:54:24.652115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.296 ms 00:16:35.804 [2024-12-15 09:54:24.652120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.804 [2024-12-15 09:54:24.673055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.804 [2024-12-15 09:54:24.673083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:35.805 [2024-12-15 09:54:24.673093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.879 ms 00:16:35.805 [2024-12-15 09:54:24.673100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.682021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.682050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:35.805 [2024-12-15 09:54:24.682059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.868 ms 00:16:35.805 [2024-12-15 09:54:24.682064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.691191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.691215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:35.805 [2024-12-15 09:54:24.691225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.083 ms 00:16:35.805 [2024-12-15 09:54:24.691231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.691513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.691523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:35.805 [2024-12-15 09:54:24.691533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:16:35.805 [2024-12-15 09:54:24.691539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.737346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.737474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:35.805 [2024-12-15 09:54:24.737494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.788 ms 00:16:35.805 [2024-12-15 09:54:24.737500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 
09:54:24.745472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:35.805 [2024-12-15 09:54:24.756898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.757021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:35.805 [2024-12-15 09:54:24.757034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.339 ms 00:16:35.805 [2024-12-15 09:54:24.757042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.757094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.757105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:35.805 [2024-12-15 09:54:24.757112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.805 [2024-12-15 09:54:24.757121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.757160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.757168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:35.805 [2024-12-15 09:54:24.757175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:35.805 [2024-12-15 09:54:24.757182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.758104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.758135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:35.805 [2024-12-15 09:54:24.758142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:16:35.805 [2024-12-15 09:54:24.758149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.758173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.758183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:35.805 [2024-12-15 09:54:24.758189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:35.805 [2024-12-15 09:54:24.758195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.758222] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:35.805 [2024-12-15 09:54:24.758231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.758237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:35.805 [2024-12-15 09:54:24.758244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:35.805 [2024-12-15 09:54:24.758249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.776392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.776418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:35.805 [2024-12-15 09:54:24.776428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.110 ms 00:16:35.805 [2024-12-15 09:54:24.776434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.805 [2024-12-15 09:54:24.776502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.805 [2024-12-15 09:54:24.776509] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:35.805 [2024-12-15 09:54:24.776518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:16:35.805 [2024-12-15 09:54:24.776525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:35.805 [2024-12-15 09:54:24.777146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:35.805 [2024-12-15 09:54:24.779591] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 206.575 ms, result 0
00:16:35.805 [2024-12-15 09:54:24.781313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:35.805 Some configs were skipped because the RPC state that can call them passed over.
00:16:36.088 09:54:24 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:16:36.088 [2024-12-15 09:54:25.011213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.088 [2024-12-15 09:54:25.011343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:16:36.088 [2024-12-15 09:54:25.011411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.056 ms
00:16:36.088 [2024-12-15 09:54:25.011433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:36.088 [2024-12-15 09:54:25.011475] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.318 ms, result 0
00:16:36.088 true
00:16:36.088 09:54:25 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:16:36.352 [2024-12-15 09:54:25.217631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.352 [2024-12-15 09:54:25.217749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:16:36.352 [2024-12-15 09:54:25.217795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.170 ms
00:16:36.352 [2024-12-15 09:54:25.217812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:36.352 [2024-12-15 09:54:25.217854] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.392 ms, result 0
00:16:36.352 true
00:16:36.352 09:54:25 -- ftl/trim.sh@81 -- # killprocess 72241
00:16:36.352 09:54:25 -- common/autotest_common.sh@936 -- # '[' -z 72241 ']'
00:16:36.352 09:54:25 -- common/autotest_common.sh@940 -- # kill -0 72241
00:16:36.352 09:54:25 -- common/autotest_common.sh@941 -- # uname
00:16:36.352 09:54:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:36.352 09:54:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72241
00:16:36.352 killing process with pid 72241
00:16:36.352 09:54:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:36.352 09:54:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:36.352 09:54:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72241'
00:16:36.352 09:54:25 -- common/autotest_common.sh@955 -- # kill 72241
00:16:36.352 09:54:25 -- common/autotest_common.sh@960 -- # wait 72241
00:16:36.921 [2024-12-15 09:54:25.793528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.921 [2024-12-15 09:54:25.793567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit
core IO channel 00:16:36.921 [2024-12-15 09:54:25.793576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:36.921 [2024-12-15 09:54:25.793583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.793603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:36.921 [2024-12-15 09:54:25.795688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.795711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:36.921 [2024-12-15 09:54:25.795723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:16:36.921 [2024-12-15 09:54:25.795732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.795973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.795982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:36.921 [2024-12-15 09:54:25.795990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:16:36.921 [2024-12-15 09:54:25.795995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.799417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.799440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:36.921 [2024-12-15 09:54:25.799451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.405 ms 00:16:36.921 [2024-12-15 09:54:25.799456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.804754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.804783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:36.921 [2024-12-15 09:54:25.804793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.264 ms 00:16:36.921 [2024-12-15 09:54:25.804798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.812931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.812956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:36.921 [2024-12-15 09:54:25.812966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.075 ms 00:16:36.921 [2024-12-15 09:54:25.812971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.819689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.819716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:36.921 [2024-12-15 09:54:25.819726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:16:36.921 [2024-12-15 09:54:25.819732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.819842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.819850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:36.921 [2024-12-15 09:54:25.819858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:36.921 [2024-12-15 09:54:25.819863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:36.921 [2024-12-15 09:54:25.828383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.828407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:36.921 [2024-12-15 09:54:25.828415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.501 ms 00:16:36.921 [2024-12-15 09:54:25.828420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.836724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.836749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:36.921 [2024-12-15 09:54:25.836760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.272 ms 00:16:36.921 [2024-12-15 09:54:25.836766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.844402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.844426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:36.921 [2024-12-15 09:54:25.844435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.596 ms 00:16:36.921 [2024-12-15 09:54:25.844440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.852047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.921 [2024-12-15 09:54:25.852071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:36.921 [2024-12-15 09:54:25.852079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.555 ms 00:16:36.921 [2024-12-15 09:54:25.852084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.921 [2024-12-15 09:54:25.852113] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:36.921 [2024-12-15 09:54:25.852123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852201] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:36.921 [2024-12-15 09:54:25.852369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 
09:54:25.852374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:16:36.922 [2024-12-15 09:54:25.852534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:36.922 [2024-12-15 09:54:25.852813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:36.922 [2024-12-15 09:54:25.852821] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:16:36.922 [2024-12-15 09:54:25.852827] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:36.922 [2024-12-15 09:54:25.852834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:36.922 [2024-12-15 09:54:25.852839] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:36.922 [2024-12-15 09:54:25.852846] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:36.922 [2024-12-15 09:54:25.852851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:36.922 [2024-12-15 09:54:25.852858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:36.922 [2024-12-15 09:54:25.852863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:36.922 [2024-12-15 09:54:25.852870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:36.922 [2024-12-15 09:54:25.852874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:36.922 [2024-12-15 09:54:25.852882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.922 [2024-12-15 09:54:25.852888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:36.922 [2024-12-15 09:54:25.852896] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:16:36.922 [2024-12-15 09:54:25.852903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.922 [2024-12-15 09:54:25.862387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.922 [2024-12-15 09:54:25.862511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:36.922 [2024-12-15 09:54:25.862527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.465 ms 00:16:36.922 [2024-12-15 09:54:25.862533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.922 [2024-12-15 09:54:25.862699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.922 [2024-12-15 09:54:25.862708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:36.922 [2024-12-15 09:54:25.862717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:36.922 [2024-12-15 09:54:25.862723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.922 [2024-12-15 09:54:25.898091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:36.922 [2024-12-15 09:54:25.898204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:36.922 [2024-12-15 09:54:25.898218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:36.922 [2024-12-15 09:54:25.898224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.922 [2024-12-15 09:54:25.898308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:36.922 [2024-12-15 09:54:25.898316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:36.922 [2024-12-15 09:54:25.898325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:36.922 [2024-12-15 09:54:25.898330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.922 [2024-12-15 09:54:25.898364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:36.922 [2024-12-15 09:54:25.898372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:36.922 [2024-12-15 09:54:25.898380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:36.922 [2024-12-15 09:54:25.898386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.923 [2024-12-15 09:54:25.898402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:36.923 [2024-12-15 09:54:25.898408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:36.923 [2024-12-15 09:54:25.898414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:36.923 [2024-12-15 09:54:25.898421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.181 [2024-12-15 09:54:25.958895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.181 [2024-12-15 09:54:25.958926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.181 [2024-12-15 09:54:25.958936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.181 [2024-12-15 09:54:25.958943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.181 [2024-12-15 09:54:25.981016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.181 [2024-12-15 09:54:25.981138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata
00:16:37.181 [2024-12-15 09:54:25.981153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.181 [2024-12-15 09:54:25.981203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.181 [2024-12-15 09:54:25.981210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:37.181 [2024-12-15 09:54:25.981219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.181 [2024-12-15 09:54:25.981251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.181 [2024-12-15 09:54:25.981278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:37.181 [2024-12-15 09:54:25.981285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.181 [2024-12-15 09:54:25.981367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.181 [2024-12-15 09:54:25.981375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:37.181 [2024-12-15 09:54:25.981383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.181 [2024-12-15 09:54:25.981415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.181 [2024-12-15 09:54:25.981422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:37.181 [2024-12-15 09:54:25.981429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.181 [2024-12-15 09:54:25.981466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.181 [2024-12-15 09:54:25.981474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:37.181 [2024-12-15 09:54:25.981483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.181 [2024-12-15 09:54:25.981523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.181 [2024-12-15 09:54:25.981530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:37.181 [2024-12-15 09:54:25.981537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.181 [2024-12-15 09:54:25.981544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.182 [2024-12-15 09:54:25.981650] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 188.106 ms, result 0
00:16:37.749 09:54:26 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:37.749 09:54:26 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:37.749 [2024-12-15 09:54:26.678431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:37.749 [2024-12-15 09:54:26.678548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72289 ]
00:16:38.007 [2024-12-15 09:54:26.827122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:38.007 [2024-12-15 09:54:26.968761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:38.266 [2024-12-15 09:54:27.174515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:38.266 [2024-12-15 09:54:27.174562] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:38.526 [2024-12-15 09:54:27.321729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:38.526 [2024-12-15 09:54:27.321875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:38.526 [2024-12-15 09:54:27.321891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:38.526 [2024-12-15 09:54:27.321897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:38.526 [2024-12-15 09:54:27.323939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:38.526 [2024-12-15 09:54:27.323971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:38.526 [2024-12-15 09:54:27.323979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms
00:16:38.526 [2024-12-15 09:54:27.323985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:38.526 [2024-12-15 09:54:27.324039] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:38.526 [2024-12-15 09:54:27.324604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:38.526 [2024-12-15 09:54:27.324632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:38.526 [2024-12-15 09:54:27.324638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:38.526 [2024-12-15 09:54:27.324645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms
00:16:38.526 [2024-12-15 09:54:27.324651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:38.526 [2024-12-15 09:54:27.325631] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:38.526 [2024-12-15 09:54:27.335700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:38.526 [2024-12-15 09:54:27.335821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:16:38.526 [2024-12-15 09:54:27.335836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.070 ms
00:16:38.526 [2024-12-15 09:54:27.335843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:38.526 [2024-12-15 09:54:27.335911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:38.526 [2024-12-15 09:54:27.335919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:16:38.526 [2024-12-15 09:54:27.335926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:16:38.526 [2024-12-15 09:54:27.335931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:38.526 [2024-12-15 09:54:27.340381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.340413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:38.526 [2024-12-15 09:54:27.340421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms 00:16:38.526 [2024-12-15 09:54:27.340430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.340504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.340511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:38.526 [2024-12-15 09:54:27.340518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:38.526 [2024-12-15 09:54:27.340524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.340541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.340547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:38.526 [2024-12-15 09:54:27.340553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:38.526 [2024-12-15 09:54:27.340559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.340584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:38.526 [2024-12-15 09:54:27.343371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.343393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:38.526 [2024-12-15 09:54:27.343400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:16:38.526 [2024-12-15 09:54:27.343408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.343438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.343444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:38.526 [2024-12-15 09:54:27.343451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:38.526 [2024-12-15 09:54:27.343457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.343470] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:38.526 [2024-12-15 09:54:27.343485] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:38.526 [2024-12-15 09:54:27.343510] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:38.526 [2024-12-15 09:54:27.343523] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:38.526 [2024-12-15 09:54:27.343579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:38.526 [2024-12-15 09:54:27.343586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:38.526 [2024-12-15 09:54:27.343594] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:38.526 [2024-12-15 09:54:27.343601] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343608] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343614] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:38.526 [2024-12-15 09:54:27.343619] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:38.526 [2024-12-15 09:54:27.343625] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:38.526 [2024-12-15 09:54:27.343632] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:38.526 [2024-12-15 09:54:27.343638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.343644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:38.526 [2024-12-15 09:54:27.343649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:16:38.526 [2024-12-15 09:54:27.343655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.343704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.526 [2024-12-15 09:54:27.343711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:38.526 [2024-12-15 09:54:27.343717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:38.526 [2024-12-15 09:54:27.343722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.526 [2024-12-15 09:54:27.343777] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:38.526 [2024-12-15 09:54:27.343785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:38.526 [2024-12-15 09:54:27.343791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:38.526 [2024-12-15 09:54:27.343808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:38.526 [2024-12-15 09:54:27.343825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:38.526 [2024-12-15 09:54:27.343836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:38.526 [2024-12-15 09:54:27.343842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:38.526 [2024-12-15 09:54:27.343848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:38.526 [2024-12-15 09:54:27.343853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:38.526 [2024-12-15 09:54:27.343862] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:38.526 [2024-12-15 09:54:27.343867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:38.526 [2024-12-15 09:54:27.343877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:38.526 [2024-12-15 09:54:27.343882] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:38.526 [2024-12-15 09:54:27.343892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:38.526 [2024-12-15 09:54:27.343897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:38.526 [2024-12-15 09:54:27.343906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:38.526 [2024-12-15 09:54:27.343922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:38.526 [2024-12-15 09:54:27.343936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:38.526 [2024-12-15 09:54:27.343950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:38.526 [2024-12-15 09:54:27.343959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:38.526 [2024-12-15 09:54:27.343964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:38.526 [2024-12-15 09:54:27.343969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:38.526 [2024-12-15 09:54:27.343974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:38.526 [2024-12-15 09:54:27.343978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:38.526 [2024-12-15 09:54:27.343984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:38.526 [2024-12-15 09:54:27.343988] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:38.527 [2024-12-15 09:54:27.343994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:38.527 [2024-12-15 09:54:27.343999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:38.527 [2024-12-15 09:54:27.344008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.527 [2024-12-15 09:54:27.344015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:38.527 [2024-12-15 09:54:27.344020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:38.527 [2024-12-15 09:54:27.344025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:38.527 [2024-12-15 09:54:27.344030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:38.527 [2024-12-15 09:54:27.344035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:38.527 [2024-12-15 09:54:27.344039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:38.527 
[2024-12-15 09:54:27.344045] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:38.527 [2024-12-15 09:54:27.344052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:38.527 [2024-12-15 09:54:27.344059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:38.527 [2024-12-15 09:54:27.344065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:38.527 [2024-12-15 09:54:27.344070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:38.527 [2024-12-15 09:54:27.344075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:38.527 [2024-12-15 09:54:27.344081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:38.527 [2024-12-15 09:54:27.344086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:38.527 [2024-12-15 09:54:27.344091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:38.527 [2024-12-15 09:54:27.344097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:38.527 [2024-12-15 09:54:27.344103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:38.527 [2024-12-15 09:54:27.344108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:38.527 [2024-12-15 09:54:27.344113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:38.527 [2024-12-15 09:54:27.344118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:38.527 [2024-12-15 09:54:27.344124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:38.527 [2024-12-15 09:54:27.344129] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:38.527 [2024-12-15 09:54:27.344139] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:38.527 [2024-12-15 09:54:27.344145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:38.527 [2024-12-15 09:54:27.344150] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:38.527 [2024-12-15 09:54:27.344155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:38.527 [2024-12-15 09:54:27.344161] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:38.527 [2024-12-15 09:54:27.344167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.344173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:38.527 [2024-12-15 09:54:27.344179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:16:38.527 [2024-12-15 09:54:27.344184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.356115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.356143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:38.527 [2024-12-15 09:54:27.356151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.897 ms 00:16:38.527 [2024-12-15 09:54:27.356157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.356245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.356271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:38.527 [2024-12-15 09:54:27.356279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:38.527 [2024-12-15 09:54:27.356285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.396763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.396795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:38.527 [2024-12-15 09:54:27.396805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.460 ms 00:16:38.527 [2024-12-15 09:54:27.396812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.396867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.396875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:38.527 [2024-12-15 09:54:27.396885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:38.527 [2024-12-15 09:54:27.396890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.397171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.397192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:38.527 [2024-12-15 09:54:27.397199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:16:38.527 [2024-12-15 09:54:27.397205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.397313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.397321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:38.527 [2024-12-15 09:54:27.397328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:38.527 [2024-12-15 09:54:27.397334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.408741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.408766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:38.527 [2024-12-15 09:54:27.408774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.389 ms 00:16:38.527 [2024-12-15 09:54:27.408785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.418842] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:38.527 [2024-12-15 09:54:27.418870] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:38.527 [2024-12-15 09:54:27.418879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.418885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:38.527 [2024-12-15 09:54:27.418892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.020 ms 00:16:38.527 [2024-12-15 09:54:27.418897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.437864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.437903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:38.527 [2024-12-15 09:54:27.437914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.910 ms 00:16:38.527 [2024-12-15 09:54:27.437920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.446879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.446916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:38.527 [2024-12-15 09:54:27.446930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.897 ms 00:16:38.527 [2024-12-15 09:54:27.446935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.455893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.455919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:38.527 [2024-12-15 09:54:27.455926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:16:38.527 [2024-12-15 09:54:27.455931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.456200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.456209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:38.527 [2024-12-15 09:54:27.456216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:16:38.527 [2024-12-15 09:54:27.456223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.501906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.502044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:38.527 [2024-12-15 09:54:27.502059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.666 ms 00:16:38.527 [2024-12-15 09:54:27.502070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.509990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:38.527 [2024-12-15 09:54:27.521397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.521426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:38.527 [2024-12-15 
09:54:27.521435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.273 ms 00:16:38.527 [2024-12-15 09:54:27.521441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.521496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.521505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:38.527 [2024-12-15 09:54:27.521514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:38.527 [2024-12-15 09:54:27.521520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.521556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.521562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:38.527 [2024-12-15 09:54:27.521568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:38.527 [2024-12-15 09:54:27.521574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.527 [2024-12-15 09:54:27.522513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.527 [2024-12-15 09:54:27.522539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:38.527 [2024-12-15 09:54:27.522546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:16:38.528 [2024-12-15 09:54:27.522552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.528 [2024-12-15 09:54:27.522576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.528 [2024-12-15 09:54:27.522585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:38.528 [2024-12-15 09:54:27.522591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:38.528 [2024-12-15 09:54:27.522597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.528 [2024-12-15 09:54:27.522621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:38.528 [2024-12-15 09:54:27.522628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.528 [2024-12-15 09:54:27.522634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:38.528 [2024-12-15 09:54:27.522640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:38.528 [2024-12-15 09:54:27.522645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.786 [2024-12-15 09:54:27.541533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.786 [2024-12-15 09:54:27.541560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:38.786 [2024-12-15 09:54:27.541570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms 00:16:38.786 [2024-12-15 09:54:27.541576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.786 [2024-12-15 09:54:27.541644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.786 [2024-12-15 09:54:27.541652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:38.786 [2024-12-15 09:54:27.541659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:38.786 [2024-12-15 09:54:27.541664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.786 [2024-12-15 09:54:27.542271] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:38.786 [2024-12-15 09:54:27.544717] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.319 ms, result 0 00:16:38.786 [2024-12-15 09:54:27.545939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:38.786 [2024-12-15 09:54:27.557183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.729  [2024-12-15T09:54:29.690Z] Copying: 33/256 [MB] (33 MBps) [2024-12-15T09:54:30.631Z] Copying: 49/256 [MB] (15 MBps) [2024-12-15T09:54:31.574Z] Copying: 65/256 [MB] (15 MBps) [2024-12-15T09:54:32.964Z] Copying: 80/256 [MB] (14 MBps) [2024-12-15T09:54:33.908Z] Copying: 98/256 [MB] (18 MBps) [2024-12-15T09:54:34.853Z] Copying: 118/256 [MB] (20 MBps) [2024-12-15T09:54:35.796Z] Copying: 137/256 [MB] (18 MBps) [2024-12-15T09:54:36.740Z] Copying: 149/256 [MB] (12 MBps) [2024-12-15T09:54:37.686Z] Copying: 165/256 [MB] (15 MBps) [2024-12-15T09:54:38.629Z] Copying: 179/256 [MB] (14 MBps) [2024-12-15T09:54:39.573Z] Copying: 195/256 [MB] (15 MBps) [2024-12-15T09:54:40.966Z] Copying: 211/256 [MB] (15 MBps) [2024-12-15T09:54:41.921Z] Copying: 223/256 [MB] (12 MBps) [2024-12-15T09:54:42.866Z] Copying: 236/256 [MB] (13 MBps) [2024-12-15T09:54:43.128Z] Copying: 249/256 [MB] (13 MBps) [2024-12-15T09:54:43.128Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-15 09:54:42.878128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.112 [2024-12-15 09:54:42.888419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.888485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.112 [2024-12-15 09:54:42.888500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:54.112 [2024-12-15 09:54:42.888508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.888534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:54.112 [2024-12-15 09:54:42.891270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.891315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.112 [2024-12-15 09:54:42.891327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:16:54.112 [2024-12-15 09:54:42.891336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.891617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.891631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.112 [2024-12-15 09:54:42.891640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:16:54.112 [2024-12-15 09:54:42.891653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.895389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.895421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.112 [2024-12-15 09:54:42.895431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:16:54.112 [2024-12-15 09:54:42.895440] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.902317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.902362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:54.112 [2024-12-15 09:54:42.902373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.841 ms 00:16:54.112 [2024-12-15 09:54:42.902382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.929340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.929575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:54.112 [2024-12-15 09:54:42.929600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.877 ms 00:16:54.112 [2024-12-15 09:54:42.929609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.946539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.946590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.112 [2024-12-15 09:54:42.946603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.813 ms 00:16:54.112 [2024-12-15 09:54:42.946612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.946801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.946816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.112 [2024-12-15 09:54:42.946826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:54.112 [2024-12-15 09:54:42.946834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.973484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:42.973533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:54.112 [2024-12-15 09:54:42.973545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.632 ms 00:16:54.112 [2024-12-15 09:54:42.973552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.112 [2024-12-15 09:54:42.999990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.112 [2024-12-15 09:54:43.000041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:54.112 [2024-12-15 09:54:43.000053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.352 ms 00:16:54.113 [2024-12-15 09:54:43.000061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.113 [2024-12-15 09:54:43.026100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.113 [2024-12-15 09:54:43.026150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.113 [2024-12-15 09:54:43.026162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.956 ms 00:16:54.113 [2024-12-15 09:54:43.026169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.113 [2024-12-15 09:54:43.052154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.113 [2024-12-15 09:54:43.052376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.113 [2024-12-15 09:54:43.052398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.845 ms 00:16:54.113 [2024-12-15 09:54:43.052405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.113 [2024-12-15 09:54:43.052480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.113 [2024-12-15 09:54:43.052498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 
[2024-12-15 09:54:43.052697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:16:54.113 [2024-12-15 09:54:43.052894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.052990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.113 [2024-12-15 09:54:43.053109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:54.114 [2024-12-15 09:54:43.053352] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:54.114 [2024-12-15 09:54:43.053360] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:16:54.114 [2024-12-15 09:54:43.053369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:54.114 [2024-12-15 09:54:43.053377] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:54.114 [2024-12-15 09:54:43.053384] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:54.114 [2024-12-15 09:54:43.053392] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:54.114 [2024-12-15 09:54:43.053402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:54.114 [2024-12-15 09:54:43.053415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:54.114 [2024-12-15 09:54:43.053423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:54.114 [2024-12-15 09:54:43.053429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:54.114 [2024-12-15 09:54:43.053435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:54.114 [2024-12-15 09:54:43.053442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.114 [2024-12-15 09:54:43.053450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:54.114 [2024-12-15 09:54:43.053459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:16:54.114 [2024-12-15 09:54:43.053466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.114 [2024-12-15 09:54:43.066744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.114 [2024-12-15 09:54:43.066793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:54.114 [2024-12-15 09:54:43.066812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.255 ms 00:16:54.114 [2024-12-15 09:54:43.066820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.114 [2024-12-15 09:54:43.067069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.114 [2024-12-15 09:54:43.067082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:54.114 [2024-12-15 09:54:43.067091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:16:54.114 [2024-12-15 09:54:43.067101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.114 [2024-12-15 09:54:43.109187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.114 [2024-12-15 09:54:43.109242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.114 [2024-12-15 09:54:43.109285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.114 [2024-12-15 09:54:43.109294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.114 [2024-12-15 09:54:43.109398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.114 [2024-12-15 
09:54:43.109411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.114 [2024-12-15 09:54:43.109419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.114 [2024-12-15 09:54:43.109428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.114 [2024-12-15 09:54:43.109478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.114 [2024-12-15 09:54:43.109489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.114 [2024-12-15 09:54:43.109497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.114 [2024-12-15 09:54:43.109510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.114 [2024-12-15 09:54:43.109528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.114 [2024-12-15 09:54:43.109539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.114 [2024-12-15 09:54:43.109549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.114 [2024-12-15 09:54:43.109558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.184672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.184708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.376 [2024-12-15 09:54:43.184721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.184729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.213742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.213774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.376 [2024-12-15 09:54:43.213784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.213792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.213840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.213849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.376 [2024-12-15 09:54:43.213857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.213864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.213896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.213904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.376 [2024-12-15 09:54:43.213912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.213919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.214004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.214014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.376 [2024-12-15 09:54:43.214022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.214030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.214060] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.214069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:54.376 [2024-12-15 09:54:43.214077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.214084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.214118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.214127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.376 [2024-12-15 09:54:43.214135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.214142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.214186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.376 [2024-12-15 09:54:43.214197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.376 [2024-12-15 09:54:43.214205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.376 [2024-12-15 09:54:43.214212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.376 [2024-12-15 09:54:43.214370] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 325.965 ms, result 0 00:16:55.320 00:16:55.320 00:16:55.320 09:54:44 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:55.320 09:54:44 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:55.892 09:54:44 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.892 [2024-12-15 09:54:44.762161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:55.892 [2024-12-15 09:54:44.762617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72489 ] 00:16:56.154 [2024-12-15 09:54:44.917969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.154 [2024-12-15 09:54:45.139437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.415 [2024-12-15 09:54:45.402706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.415 [2024-12-15 09:54:45.403053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.678 [2024-12-15 09:54:45.559055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.559120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.678 [2024-12-15 09:54:45.559136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:56.678 [2024-12-15 09:54:45.559145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.562292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.562348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.678 [2024-12-15 09:54:45.562360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:16:56.678 [2024-12-15 09:54:45.562369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.562490] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.678 [2024-12-15 09:54:45.563547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.678 [2024-12-15 09:54:45.563603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.563613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.678 [2024-12-15 09:54:45.563625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:16:56.678 [2024-12-15 09:54:45.563633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.565863] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:56.678 [2024-12-15 09:54:45.580747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.580800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:56.678 [2024-12-15 09:54:45.580816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.886 ms 00:16:56.678 [2024-12-15 09:54:45.580824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.580958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.580971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:56.678 [2024-12-15 09:54:45.580980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:56.678 [2024-12-15 09:54:45.580989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.589379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 
09:54:45.589420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.678 [2024-12-15 09:54:45.589431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.341 ms 00:16:56.678 [2024-12-15 09:54:45.589446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.589568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.589580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.678 [2024-12-15 09:54:45.589590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:56.678 [2024-12-15 09:54:45.589599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.589630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.589639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.678 [2024-12-15 09:54:45.589647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:56.678 [2024-12-15 09:54:45.589655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.589690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:56.678 [2024-12-15 09:54:45.593893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.593935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.678 [2024-12-15 09:54:45.593947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:16:56.678 [2024-12-15 09:54:45.593958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.594036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.594047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.678 [2024-12-15 09:54:45.594056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:56.678 [2024-12-15 09:54:45.594064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.594084] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:56.678 [2024-12-15 09:54:45.594107] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:56.678 [2024-12-15 09:54:45.594145] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:56.678 [2024-12-15 09:54:45.594166] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:56.678 [2024-12-15 09:54:45.594245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:56.678 [2024-12-15 09:54:45.594286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.678 [2024-12-15 09:54:45.594298] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:56.678 [2024-12-15 09:54:45.594309] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594321] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594329] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:56.678 [2024-12-15 09:54:45.594338] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.678 [2024-12-15 09:54:45.594347] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:56.678 [2024-12-15 09:54:45.594359] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:56.678 [2024-12-15 09:54:45.594371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.594381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.678 [2024-12-15 09:54:45.594389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:56.678 [2024-12-15 09:54:45.594396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.594465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.678 [2024-12-15 09:54:45.594476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.678 [2024-12-15 09:54:45.594484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:56.678 [2024-12-15 09:54:45.594492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.678 [2024-12-15 09:54:45.594570] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.678 [2024-12-15 09:54:45.594583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.678 [2024-12-15 09:54:45.594591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.678 [2024-12-15 09:54:45.594615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.678 [2024-12-15 09:54:45.594638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.678 [2024-12-15 09:54:45.594652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.678 [2024-12-15 09:54:45.594659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:56.678 [2024-12-15 09:54:45.594665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.678 [2024-12-15 09:54:45.594676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.678 [2024-12-15 09:54:45.594692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:56.678 [2024-12-15 09:54:45.594699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.678 [2024-12-15 09:54:45.594713] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:56.678 [2024-12-15 09:54:45.594721] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:56.678 [2024-12-15 09:54:45.594733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:56.678 [2024-12-15 09:54:45.594740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.678 [2024-12-15 09:54:45.594754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.678 [2024-12-15 09:54:45.594776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.678 [2024-12-15 09:54:45.594795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.678 [2024-12-15 09:54:45.594818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.678 [2024-12-15 09:54:45.594831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.678 [2024-12-15 09:54:45.594838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:56.678 [2024-12-15 09:54:45.594845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.678 [2024-12-15 09:54:45.594851] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.679 [2024-12-15 09:54:45.594858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:56.679 [2024-12-15 09:54:45.594865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.679 [2024-12-15 09:54:45.594873] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.679 [2024-12-15 09:54:45.594881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.679 [2024-12-15 09:54:45.594888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.679 [2024-12-15 09:54:45.594899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.679 [2024-12-15 09:54:45.594908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.679 [2024-12-15 09:54:45.594916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.679 [2024-12-15 09:54:45.594924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.679 [2024-12-15 09:54:45.594931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.679 [2024-12-15 09:54:45.594938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.679 [2024-12-15 09:54:45.594944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.679 [2024-12-15 09:54:45.594952] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.679 [2024-12-15 09:54:45.594964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.679 [2024-12-15 09:54:45.594973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:56.679 [2024-12-15 09:54:45.594981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:56.679 [2024-12-15 09:54:45.594988] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:56.679 [2024-12-15 09:54:45.594996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:56.679 [2024-12-15 09:54:45.595003] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:56.679 [2024-12-15 09:54:45.595011] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:56.679 [2024-12-15 09:54:45.595018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:56.679 [2024-12-15 09:54:45.595029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:56.679 [2024-12-15 09:54:45.595036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:56.679 [2024-12-15 09:54:45.595043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:56.679 [2024-12-15 09:54:45.595050] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:56.679 [2024-12-15 09:54:45.595058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:56.679 [2024-12-15 09:54:45.595066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:56.679 [2024-12-15 09:54:45.595074] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.679 [2024-12-15 09:54:45.595088] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.679 [2024-12-15 09:54:45.595098] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.679 [2024-12-15 09:54:45.595105] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.679 [2024-12-15 09:54:45.595114] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.679 [2024-12-15 09:54:45.595124] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.679 [2024-12-15 09:54:45.595133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.595141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.679 [2024-12-15 09:54:45.595149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:16:56.679 [2024-12-15 09:54:45.595156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.613783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.613834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.679 [2024-12-15 09:54:45.613847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.581 ms 00:16:56.679 [2024-12-15 09:54:45.613857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.613990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.614000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.679 [2024-12-15 09:54:45.614009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:56.679 [2024-12-15 09:54:45.614018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.660879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.661105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.679 [2024-12-15 09:54:45.661129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.837 ms 00:16:56.679 [2024-12-15 09:54:45.661138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.661229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.661241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.679 [2024-12-15 09:54:45.661286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.679 [2024-12-15 09:54:45.661295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.661817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.661856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.679 [2024-12-15 09:54:45.661868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:16:56.679 [2024-12-15 09:54:45.661876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.662020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.662046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.679 [2024-12-15 09:54:45.662056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:56.679 [2024-12-15 09:54:45.662065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.679 [2024-12-15 09:54:45.679553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.679 [2024-12-15 09:54:45.679742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.679 [2024-12-15 09:54:45.679762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.464 ms 00:16:56.679 
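Each management step in the trace above is emitted by mngt/ftl_mngt.c:trace_step as four consecutive NOTICE records: Action, name, duration, and status. When a run needs triage, the per-step durations can be tallied straight from a saved copy of this console output. A minimal sketch in Python, assuming one record per line as it appears on the live console (the wrapped transcript here would need unwrapping first); the summarize helper and the console.log default are illustrative names, not part of the test harness:

import re
import sys

# The name and duration of a step arrive in separate trace_step records.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def summarize(path):
    """Pair each 'name:' record with the 'duration:' record that follows it."""
    steps, pending = [], None
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = NAME_RE.search(line)
            if m:
                pending = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None
    return steps

if __name__ == "__main__":
    steps = summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")
    for name, ms in sorted(steps, key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")

For the startup above, such a tally would put Restore P2L checkpoints (66.732 ms) and Initialize NV cache (46.837 ms) at the top, and the listed step durations account for most of the 319.237 ms that finish_msg later reports for 'FTL startup'.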
[2024-12-15 09:54:45.679776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.694658] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:57.007 [2024-12-15 09:54:45.694857] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:57.007 [2024-12-15 09:54:45.694878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.694887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:57.007 [2024-12-15 09:54:45.694899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.977 ms 00:16:57.007 [2024-12-15 09:54:45.694907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.721600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.721791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:57.007 [2024-12-15 09:54:45.721812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.606 ms 00:16:57.007 [2024-12-15 09:54:45.721821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.735479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.735529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:57.007 [2024-12-15 09:54:45.735552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.566 ms 00:16:57.007 [2024-12-15 09:54:45.735560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.748885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.749085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:57.007 [2024-12-15 09:54:45.749106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.230 ms 00:16:57.007 [2024-12-15 09:54:45.749115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.749547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.749564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:57.007 [2024-12-15 09:54:45.749575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:16:57.007 [2024-12-15 09:54:45.749588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.816346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.816402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:57.007 [2024-12-15 09:54:45.816415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.732 ms 00:16:57.007 [2024-12-15 09:54:45.816431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.828265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:57.007 [2024-12-15 09:54:45.847654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.847881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:57.007 [2024-12-15 09:54:45.847903] 
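The 'SB metadata layout' dump above records a hex block offset and block size for every metadata region, and the regions are expected to tile the device with no gaps or overlaps. A quick self-consistency check, hard-coding the nvc-side values from that dump; the offsets and sizes are in 4 KiB FTL blocks (the 0x5a00-block l2p region is the same 90.00 MiB reported in the NV cache layout dump), and REGIONS is just an illustrative name for the transcribed table:

# (region type, blk_offs, blk_sz) copied from the "SB metadata layout - nvc"
# dump above, in listed order.
REGIONS = [
    (0x0, 0x0, 0x20), (0x2, 0x20, 0x5a00), (0x3, 0x5a20, 0x80),
    (0x4, 0x5aa0, 0x80), (0xa, 0x5b20, 0x400), (0xb, 0x5f20, 0x400),
    (0xc, 0x6320, 0x400), (0xd, 0x6720, 0x400), (0xe, 0x6b20, 0x40),
    (0xf, 0x6b60, 0x40), (0x6, 0x6ba0, 0x20), (0x7, 0x6bc0, 0x20),
    (0x8, 0x6be0, 0x100000), (0xfffffffe, 0x106be0, 0x3c720),
]

end = 0
for rtype, offs, size in REGIONS:
    # Each region must start exactly where the previous one ended.
    assert offs == end, f"gap or overlap before region type 0x{rtype:x}"
    end = offs + size
print(f"contiguous through block 0x{end:x} (~{end * 4 / 1024:.0f} MiB)")

The total comes out at roughly 5171 MiB, consistent with the 'NV cache device capacity: 5171.00 MiB' line earlier in the dump.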
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.112 ms 00:16:57.007 [2024-12-15 09:54:45.847912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.848008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.848018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:57.007 [2024-12-15 09:54:45.848032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:57.007 [2024-12-15 09:54:45.848040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.848102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.848114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:57.007 [2024-12-15 09:54:45.848123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:57.007 [2024-12-15 09:54:45.848131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.849594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.849641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:57.007 [2024-12-15 09:54:45.849652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:16:57.007 [2024-12-15 09:54:45.849660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.849702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.849715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:57.007 [2024-12-15 09:54:45.849724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:57.007 [2024-12-15 09:54:45.849732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.849772] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:57.007 [2024-12-15 09:54:45.849782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.849789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:57.007 [2024-12-15 09:54:45.849799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:57.007 [2024-12-15 09:54:45.849806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.877261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.877314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:57.007 [2024-12-15 09:54:45.877328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.418 ms 00:16:57.007 [2024-12-15 09:54:45.877337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.007 [2024-12-15 09:54:45.877468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.007 [2024-12-15 09:54:45.877481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:57.007 [2024-12-15 09:54:45.877493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:57.007 [2024-12-15 09:54:45.877502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.008 [2024-12-15 09:54:45.878642] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.008 [2024-12-15 09:54:45.882455] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.237 ms, result 0 00:16:57.008 [2024-12-15 09:54:45.883839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.008 [2024-12-15 09:54:45.898148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.584  [2024-12-15T09:54:46.600Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-12-15 09:54:46.300676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.584 [2024-12-15 09:54:46.309163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.309202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:57.584 [2024-12-15 09:54:46.309213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.584 [2024-12-15 09:54:46.309220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.309241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:57.584 [2024-12-15 09:54:46.311872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.311899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:57.584 [2024-12-15 09:54:46.311909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:16:57.584 [2024-12-15 09:54:46.311916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.314160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.314192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:57.584 [2024-12-15 09:54:46.314201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.223 ms 00:16:57.584 [2024-12-15 09:54:46.314214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.318668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.318694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:57.584 [2024-12-15 09:54:46.318703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:16:57.584 [2024-12-15 09:54:46.318710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.325550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.325580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:57.584 [2024-12-15 09:54:46.325589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.812 ms 00:16:57.584 [2024-12-15 09:54:46.325602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.349619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.349654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:57.584 [2024-12-15 09:54:46.349664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.961 ms 00:16:57.584 [2024-12-15 
09:54:46.349671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.364643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.364681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:57.584 [2024-12-15 09:54:46.364694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.922 ms 00:16:57.584 [2024-12-15 09:54:46.364703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.364855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.364866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:57.584 [2024-12-15 09:54:46.364875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:57.584 [2024-12-15 09:54:46.364883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.390100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.390142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:57.584 [2024-12-15 09:54:46.390153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.200 ms 00:16:57.584 [2024-12-15 09:54:46.390160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.415579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.415768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:57.584 [2024-12-15 09:54:46.415789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.362 ms 00:16:57.584 [2024-12-15 09:54:46.415797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.441409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.441464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:57.584 [2024-12-15 09:54:46.441479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.274 ms 00:16:57.584 [2024-12-15 09:54:46.441487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.466548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.584 [2024-12-15 09:54:46.466752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:57.584 [2024-12-15 09:54:46.466775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.966 ms 00:16:57.584 [2024-12-15 09:54:46.466783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.584 [2024-12-15 09:54:46.466922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:57.584 [2024-12-15 09:54:46.466955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.466968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.466978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.466986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.466994] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 
09:54:46.467190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:57.584 [2024-12-15 09:54:46.467353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:16:57.585 [2024-12-15 09:54:46.467431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:57.585 [2024-12-15 09:54:46.467798] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:57.585 [2024-12-15 09:54:46.467808] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:16:57.585 [2024-12-15 09:54:46.467816] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:57.585 [2024-12-15 09:54:46.467824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:57.585 [2024-12-15 
09:54:46.467831] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:57.585 [2024-12-15 09:54:46.467839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:57.585 [2024-12-15 09:54:46.467850] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:57.585 [2024-12-15 09:54:46.467859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:57.585 [2024-12-15 09:54:46.467867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:57.585 [2024-12-15 09:54:46.467873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:57.585 [2024-12-15 09:54:46.467879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:57.585 [2024-12-15 09:54:46.467887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.585 [2024-12-15 09:54:46.467895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:57.585 [2024-12-15 09:54:46.467904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:16:57.585 [2024-12-15 09:54:46.467911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.585 [2024-12-15 09:54:46.481229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.585 [2024-12-15 09:54:46.481288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:57.585 [2024-12-15 09:54:46.481307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.298 ms 00:16:57.585 [2024-12-15 09:54:46.481315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.585 [2024-12-15 09:54:46.481557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.585 [2024-12-15 09:54:46.481569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:57.585 [2024-12-15 09:54:46.481578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:16:57.585 [2024-12-15 09:54:46.481585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.585 [2024-12-15 09:54:46.522767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.585 [2024-12-15 09:54:46.522811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.585 [2024-12-15 09:54:46.522828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.585 [2024-12-15 09:54:46.522837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.585 [2024-12-15 09:54:46.522930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.585 [2024-12-15 09:54:46.522940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.585 [2024-12-15 09:54:46.522949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.585 [2024-12-15 09:54:46.522956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.585 [2024-12-15 09:54:46.523004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.585 [2024-12-15 09:54:46.523015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.585 [2024-12-15 09:54:46.523024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.585 [2024-12-15 09:54:46.523037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.585 [2024-12-15 09:54:46.523055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:16:57.585 [2024-12-15 09:54:46.523064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.585 [2024-12-15 09:54:46.523071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.585 [2024-12-15 09:54:46.523079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.602871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.602921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.848 [2024-12-15 09:54:46.602939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.602947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.634903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.634948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.848 [2024-12-15 09:54:46.634960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.634969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.635029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.635039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.848 [2024-12-15 09:54:46.635048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.635055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.635096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.635106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.848 [2024-12-15 09:54:46.635114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.635123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.635228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.635241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.848 [2024-12-15 09:54:46.635250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.635293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.635333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.635342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:57.848 [2024-12-15 09:54:46.635351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.635360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.635405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.635415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.848 [2024-12-15 09:54:46.635424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.635431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 
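In the statistics block dumped above (just before the rollback records), WAF is the write amplification factor: roughly media writes divided by user writes. With total writes at 960 and user writes at 0, it is reported as inf. The same arithmetic as a one-function sketch; the waf helper name is illustrative:

def waf(total_writes: int, user_writes: int) -> float:
    """Write amplification factor: media writes per user write."""
    return float("inf") if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))  # inf -- matches the "WAF: inf" line in the dump above

The 960 media writes with zero user writes are expected here: the device was only formatted and restored, so all writes so far are FTL metadata.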
[2024-12-15 09:54:46.635486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.848 [2024-12-15 09:54:46.635501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.848 [2024-12-15 09:54:46.635509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.848 [2024-12-15 09:54:46.635518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.848 [2024-12-15 09:54:46.635677] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 326.501 ms, result 0 00:16:58.790 00:16:58.790 00:16:58.790 09:54:47 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:58.790 09:54:47 -- ftl/trim.sh@93 -- # svcpid=72525 00:16:58.790 09:54:47 -- ftl/trim.sh@94 -- # waitforlisten 72525 00:16:58.790 09:54:47 -- common/autotest_common.sh@829 -- # '[' -z 72525 ']' 00:16:58.790 09:54:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.790 09:54:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.790 09:54:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.790 09:54:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.790 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:16:58.790 [2024-12-15 09:54:47.630016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:58.790 [2024-12-15 09:54:47.630435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72525 ] 00:16:58.790 [2024-12-15 09:54:47.783337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.050 [2024-12-15 09:54:47.978858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:59.050 [2024-12-15 09:54:47.979090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.433 09:54:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.433 09:54:49 -- common/autotest_common.sh@862 -- # return 0 00:17:00.433 09:54:49 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:00.433 [2024-12-15 09:54:49.365351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.433 [2024-12-15 09:54:49.365422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.695 [2024-12-15 09:54:49.534325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.534536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:00.695 [2024-12-15 09:54:49.534566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:00.695 [2024-12-15 09:54:49.534575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.537620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.537812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.695 [2024-12-15 09:54:49.537837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
3.016 ms 00:17:00.695 [2024-12-15 09:54:49.537845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.538092] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:00.695 [2024-12-15 09:54:49.538888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:00.695 [2024-12-15 09:54:49.538933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.538943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.695 [2024-12-15 09:54:49.538955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:17:00.695 [2024-12-15 09:54:49.538963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.540747] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:00.695 [2024-12-15 09:54:49.554901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.554969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:00.695 [2024-12-15 09:54:49.554983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.162 ms 00:17:00.695 [2024-12-15 09:54:49.554999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.555140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.555160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:00.695 [2024-12-15 09:54:49.555171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:00.695 [2024-12-15 09:54:49.555187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.563099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.563312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.695 [2024-12-15 09:54:49.563331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.850 ms 00:17:00.695 [2024-12-15 09:54:49.563342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.563443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.563455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.695 [2024-12-15 09:54:49.563466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:00.695 [2024-12-15 09:54:49.563476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.563505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.563516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:00.695 [2024-12-15 09:54:49.563524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:00.695 [2024-12-15 09:54:49.563535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.563564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:00.695 [2024-12-15 09:54:49.567747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.567785] 
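Before the rpc.py load_config call above can configure the freshly launched spdk_tgt, the harness's waitforlisten step blocks until the target accepts connections on /var/tmp/spdk.sock. A minimal Python sketch of that same wait, under the assumption that polling the UNIX socket is sufficient; the real harness uses a shell helper, and wait_for_rpc and its timeout are illustrative:

import socket
import time

def wait_for_rpc(path="/var/tmp/spdk.sock", timeout=30.0):
    """Poll until something is accepting connections on the RPC UNIX socket."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return  # target is up and listening
        except OSError:
            time.sleep(0.2)  # not up yet; retry until the deadline
        finally:
            s.close()
    raise TimeoutError(f"no RPC listener on {path} after {timeout}s")

Once the socket accepts, load_config replays the saved bdev/FTL configuration over RPC, which is why the second bring-up above repeats the same Check configuration, Open base bdev, and layout steps as the first.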
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.695 [2024-12-15 09:54:49.567797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.191 ms 00:17:00.695 [2024-12-15 09:54:49.567805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.567882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.695 [2024-12-15 09:54:49.567892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:00.695 [2024-12-15 09:54:49.567903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:00.695 [2024-12-15 09:54:49.567914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.695 [2024-12-15 09:54:49.567938] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:00.695 [2024-12-15 09:54:49.567959] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:00.696 [2024-12-15 09:54:49.567997] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:00.696 [2024-12-15 09:54:49.568014] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:00.696 [2024-12-15 09:54:49.568093] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:00.696 [2024-12-15 09:54:49.568104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:00.696 [2024-12-15 09:54:49.568121] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:00.696 [2024-12-15 09:54:49.568131] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568142] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568151] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:00.696 [2024-12-15 09:54:49.568162] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:00.696 [2024-12-15 09:54:49.568170] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:00.696 [2024-12-15 09:54:49.568181] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:00.696 [2024-12-15 09:54:49.568189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.696 [2024-12-15 09:54:49.568198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:00.696 [2024-12-15 09:54:49.568206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:00.696 [2024-12-15 09:54:49.568215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.696 [2024-12-15 09:54:49.568307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.696 [2024-12-15 09:54:49.568321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:00.696 [2024-12-15 09:54:49.568329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:00.696 [2024-12-15 09:54:49.568339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.696 [2024-12-15 09:54:49.568418] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:00.696 [2024-12-15 09:54:49.568432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:00.696 [2024-12-15 09:54:49.568440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:00.696 [2024-12-15 09:54:49.568472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:00.696 [2024-12-15 09:54:49.568500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.696 [2024-12-15 09:54:49.568518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:00.696 [2024-12-15 09:54:49.568527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:00.696 [2024-12-15 09:54:49.568535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.696 [2024-12-15 09:54:49.568552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:00.696 [2024-12-15 09:54:49.568558] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:00.696 [2024-12-15 09:54:49.568566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:00.696 [2024-12-15 09:54:49.568582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:00.696 [2024-12-15 09:54:49.568589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:00.696 [2024-12-15 09:54:49.568618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:00.696 [2024-12-15 09:54:49.568626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:00.696 [2024-12-15 09:54:49.568644] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:00.696 [2024-12-15 09:54:49.568674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:00.696 [2024-12-15 09:54:49.568698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:00.696 [2024-12-15 09:54:49.568721] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:00.696 [2024-12-15 09:54:49.568747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.696 [2024-12-15 09:54:49.568763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:00.696 [2024-12-15 09:54:49.568769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:00.696 [2024-12-15 09:54:49.568779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.696 [2024-12-15 09:54:49.568787] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:00.696 [2024-12-15 09:54:49.568799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:00.696 [2024-12-15 09:54:49.568807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.696 [2024-12-15 09:54:49.568824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:00.696 [2024-12-15 09:54:49.568833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:00.696 [2024-12-15 09:54:49.568841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:00.696 [2024-12-15 09:54:49.568850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:00.696 [2024-12-15 09:54:49.568857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:00.696 [2024-12-15 09:54:49.568866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:00.696 [2024-12-15 09:54:49.568874] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:00.696 [2024-12-15 09:54:49.568889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.696 [2024-12-15 09:54:49.568899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:00.696 [2024-12-15 09:54:49.568908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:00.696 [2024-12-15 09:54:49.568915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:00.696 [2024-12-15 09:54:49.568927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:00.696 [2024-12-15 09:54:49.568935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:00.696 [2024-12-15 09:54:49.568944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:00.696 [2024-12-15 09:54:49.568951] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:00.696 [2024-12-15 
09:54:49.568961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:00.696 [2024-12-15 09:54:49.568969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:00.696 [2024-12-15 09:54:49.568978] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:00.696 [2024-12-15 09:54:49.568984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:00.696 [2024-12-15 09:54:49.568993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:00.696 [2024-12-15 09:54:49.569001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:00.696 [2024-12-15 09:54:49.569011] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:00.696 [2024-12-15 09:54:49.569020] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.696 [2024-12-15 09:54:49.569031] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:00.696 [2024-12-15 09:54:49.569039] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:00.696 [2024-12-15 09:54:49.569048] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:00.696 [2024-12-15 09:54:49.569055] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:00.696 [2024-12-15 09:54:49.569066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.696 [2024-12-15 09:54:49.569076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:00.696 [2024-12-15 09:54:49.569085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:17:00.696 [2024-12-15 09:54:49.569093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.696 [2024-12-15 09:54:49.587150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.696 [2024-12-15 09:54:49.587360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.696 [2024-12-15 09:54:49.587388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.002 ms 00:17:00.696 [2024-12-15 09:54:49.587401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.587538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.587550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.697 [2024-12-15 09:54:49.587561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:00.697 [2024-12-15 09:54:49.587568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.622731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 
09:54:49.622920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.697 [2024-12-15 09:54:49.622943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.138 ms 00:17:00.697 [2024-12-15 09:54:49.622952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.623025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.623039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.697 [2024-12-15 09:54:49.623050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:00.697 [2024-12-15 09:54:49.623058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.623612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.623635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.697 [2024-12-15 09:54:49.623649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:17:00.697 [2024-12-15 09:54:49.623658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.623790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.623801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.697 [2024-12-15 09:54:49.623814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:00.697 [2024-12-15 09:54:49.623822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.641756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.641923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.697 [2024-12-15 09:54:49.641947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.910 ms 00:17:00.697 [2024-12-15 09:54:49.641955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.656291] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:00.697 [2024-12-15 09:54:49.656454] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:00.697 [2024-12-15 09:54:49.656476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.656486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:00.697 [2024-12-15 09:54:49.656498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.399 ms 00:17:00.697 [2024-12-15 09:54:49.656506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.682323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.682367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:00.697 [2024-12-15 09:54:49.682381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.734 ms 00:17:00.697 [2024-12-15 09:54:49.682390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.697 [2024-12-15 09:54:49.695556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.697 [2024-12-15 09:54:49.695737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:00.697 [2024-12-15 09:54:49.695761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.073 ms 00:17:00.697 [2024-12-15 09:54:49.695769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.708961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.956 [2024-12-15 09:54:49.709011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:00.956 [2024-12-15 09:54:49.709029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.111 ms 00:17:00.956 [2024-12-15 09:54:49.709036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.709598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.956 [2024-12-15 09:54:49.709638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:00.956 [2024-12-15 09:54:49.709656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:00.956 [2024-12-15 09:54:49.709666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.764633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.956 [2024-12-15 09:54:49.764675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:00.956 [2024-12-15 09:54:49.764691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.933 ms 00:17:00.956 [2024-12-15 09:54:49.764698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.773146] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:00.956 [2024-12-15 09:54:49.790752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.956 [2024-12-15 09:54:49.791029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:00.956 [2024-12-15 09:54:49.791057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.975 ms 00:17:00.956 [2024-12-15 09:54:49.791074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.791175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.956 [2024-12-15 09:54:49.791198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:00.956 [2024-12-15 09:54:49.791214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:00.956 [2024-12-15 09:54:49.791235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.791354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.956 [2024-12-15 09:54:49.791375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:00.956 [2024-12-15 09:54:49.791389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:00.956 [2024-12-15 09:54:49.791406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.956 [2024-12-15 09:54:49.793453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.957 [2024-12-15 09:54:49.793502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:00.957 [2024-12-15 09:54:49.793518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:17:00.957 [2024-12-15 09:54:49.793534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
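Every management step in the startup sequence above is reported through the same four-line group from mngt/ftl_mngt.c (trace_step, lines 406-410): an "Action" marker, the step name, its duration, and a status code (0 on success). The sketch below shows one way such a tracer could be structured; it is a minimal illustration, and the step_trace/step_begin/step_end names are hypothetical, not SPDK's actual ftl_mngt internals, which drive these prints from a table of steps.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical step descriptor; SPDK's real ftl_mngt types differ. */
struct step_trace {
	const char *name;       /* e.g. "Start core poller" */
	struct timespec start;  /* taken when the step begins */
};

static void step_begin(struct step_trace *t, const char *name)
{
	t->name = name;
	clock_gettime(CLOCK_MONOTONIC, &t->start);
}

/* Emit the Action / name / duration / status group seen throughout this log. */
static void step_end(const struct step_trace *t, int status)
{
	struct timespec end;
	clock_gettime(CLOCK_MONOTONIC, &end);
	double ms = (end.tv_sec - t->start.tv_sec) * 1e3 +
	            (end.tv_nsec - t->start.tv_nsec) / 1e6;
	printf("[FTL][ftl0] Action\n");
	printf("[FTL][ftl0] name: %s\n", t->name);
	printf("[FTL][ftl0] duration: %.3f ms\n", ms);
	printf("[FTL][ftl0] status: %d\n", status);
}

int main(void)
{
	struct step_trace t;
	step_begin(&t, "Start core poller");
	/* ... the step's actual work would run here ... */
	step_end(&t, 0);
	return 0;
}
```

The finish_msg line at the end of the sequence ("Management process finished, name 'FTL startup', duration = 280.733 ms, result 0") then reports the duration and result of the whole step chain.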
00:17:00.957 [2024-12-15 09:54:49.793582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.957 [2024-12-15 09:54:49.793603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:00.957 [2024-12-15 09:54:49.793617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:00.957 [2024-12-15 09:54:49.793632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.957 [2024-12-15 09:54:49.793690] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:00.957 [2024-12-15 09:54:49.793714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.957 [2024-12-15 09:54:49.793727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:00.957 [2024-12-15 09:54:49.793743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:00.957 [2024-12-15 09:54:49.793756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.957 [2024-12-15 09:54:49.814045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.957 [2024-12-15 09:54:49.814162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:00.957 [2024-12-15 09:54:49.814212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.247 ms 00:17:00.957 [2024-12-15 09:54:49.814231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.957 [2024-12-15 09:54:49.814322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.957 [2024-12-15 09:54:49.814344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:00.957 [2024-12-15 09:54:49.814365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:00.957 [2024-12-15 09:54:49.814382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.957 [2024-12-15 09:54:49.815274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:00.957 [2024-12-15 09:54:49.817954] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.733 ms, result 0 00:17:00.957 [2024-12-15 09:54:49.820377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.957 Some configs were skipped because the RPC state that can call them passed over. 
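The two bdev_ftl_unmap RPCs that follow (trim.sh lines 99 and 100) trim 1024 blocks at each end of the device. The tail offset is consistent with the L2P geometry printed in the layout dump: a 90.00 MiB l2p region with a 4-byte address size gives 23592960 mappable blocks, so the last 1024-block range starts at LBA 23591936. A small, self-contained check of that arithmetic (the variable names are illustrative only):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Values taken from the layout dump above. */
	const uint64_t l2p_region_mib = 90;    /* "Region l2p ... blocks: 90.00 MiB" */
	const uint64_t l2p_addr_size  = 4;     /* "L2P address size: 4"              */
	const uint64_t num_blocks     = 1024;  /* rpc.py ... --num_blocks 1024       */

	uint64_t l2p_entries = l2p_region_mib * 1024 * 1024 / l2p_addr_size;
	assert(l2p_entries == 23592960);       /* "L2P entries: 23592960" */

	/* The second unmap targets the last 1024 mappable blocks. */
	uint64_t tail_lba = l2p_entries - num_blocks;
	printf("tail unmap starts at LBA %llu\n",
	       (unsigned long long)tail_lba);  /* prints 23591936 */
	return 0;
}
```

Each RPC prints true on success, matching the status 0 of the corresponding "Process unmap" management step.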
00:17:00.957 09:54:49 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:01.214 [2024-12-15 09:54:50.054856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.214 [2024-12-15 09:54:50.054898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:01.215 [2024-12-15 09:54:50.054908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.345 ms 00:17:01.215 [2024-12-15 09:54:50.054916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.215 [2024-12-15 09:54:50.054946] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.437 ms, result 0 00:17:01.215 true 00:17:01.215 09:54:50 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:01.473 [2024-12-15 09:54:50.261590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.473 [2024-12-15 09:54:50.261623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:01.473 [2024-12-15 09:54:50.261634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.477 ms 00:17:01.473 [2024-12-15 09:54:50.261640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.473 [2024-12-15 09:54:50.261670] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.557 ms, result 0 00:17:01.473 true 00:17:01.473 09:54:50 -- ftl/trim.sh@102 -- # killprocess 72525 00:17:01.473 09:54:50 -- common/autotest_common.sh@936 -- # '[' -z 72525 ']' 00:17:01.473 09:54:50 -- common/autotest_common.sh@940 -- # kill -0 72525 00:17:01.473 09:54:50 -- common/autotest_common.sh@941 -- # uname 00:17:01.473 09:54:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.473 09:54:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72525 00:17:01.473 killing process with pid 72525 00:17:01.473 09:54:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:01.473 09:54:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:01.473 09:54:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72525' 00:17:01.473 09:54:50 -- common/autotest_common.sh@955 -- # kill 72525 00:17:01.473 09:54:50 -- common/autotest_common.sh@960 -- # wait 72525 00:17:02.041 [2024-12-15 09:54:50.828535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.828582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:02.041 [2024-12-15 09:54:50.828593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:02.041 [2024-12-15 09:54:50.828620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.828637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:02.041 [2024-12-15 09:54:50.830585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.830607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:02.041 [2024-12-15 09:54:50.830619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:17:02.041 [2024-12-15 09:54:50.830626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 
09:54:50.830847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.830856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:02.041 [2024-12-15 09:54:50.830863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:17:02.041 [2024-12-15 09:54:50.830869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.834463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.834489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:02.041 [2024-12-15 09:54:50.834498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:17:02.041 [2024-12-15 09:54:50.834503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.839793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.839953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:02.041 [2024-12-15 09:54:50.839968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:17:02.041 [2024-12-15 09:54:50.839976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.848201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.848227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.041 [2024-12-15 09:54:50.848239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.171 ms 00:17:02.041 [2024-12-15 09:54:50.848245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.854644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.854743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.041 [2024-12-15 09:54:50.854757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.355 ms 00:17:02.041 [2024-12-15 09:54:50.854763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.854867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.854874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.041 [2024-12-15 09:54:50.854882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:02.041 [2024-12-15 09:54:50.854887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.863408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.863432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:02.041 [2024-12-15 09:54:50.863440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.505 ms 00:17:02.041 [2024-12-15 09:54:50.863446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.871679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.871703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:02.041 [2024-12-15 09:54:50.871715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.203 ms 00:17:02.041 [2024-12-15 09:54:50.871720] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.879748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.879771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.041 [2024-12-15 09:54:50.879780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.990 ms 00:17:02.041 [2024-12-15 09:54:50.879785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.887635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.041 [2024-12-15 09:54:50.887658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.041 [2024-12-15 09:54:50.887666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.803 ms 00:17:02.041 [2024-12-15 09:54:50.887672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.041 [2024-12-15 09:54:50.887698] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.041 [2024-12-15 09:54:50.887708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887823] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:02.041 [2024-12-15 09:54:50.887894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887986] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.887998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 
09:54:50.888146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:02.042 [2024-12-15 09:54:50.888335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:02.042 [2024-12-15 09:54:50.888397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:02.042 [2024-12-15 09:54:50.888405] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:17:02.042 [2024-12-15 09:54:50.888412] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:02.042 [2024-12-15 09:54:50.888418] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:02.042 [2024-12-15 09:54:50.888423] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:02.042 [2024-12-15 09:54:50.888431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:02.042 [2024-12-15 09:54:50.888436] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:02.042 [2024-12-15 09:54:50.888444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:02.042 [2024-12-15 09:54:50.888449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:02.042 [2024-12-15 09:54:50.888455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:02.042 [2024-12-15 09:54:50.888460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:02.042 [2024-12-15 09:54:50.888467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.042 [2024-12-15 09:54:50.888472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:02.042 [2024-12-15 09:54:50.888480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:17:02.042 [2024-12-15 09:54:50.888486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.042 [2024-12-15 09:54:50.897799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.042 [2024-12-15 09:54:50.897821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:02.042 [2024-12-15 09:54:50.897831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.295 ms 00:17:02.043 [2024-12-15 09:54:50.897837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:50.898003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.043 [2024-12-15 09:54:50.898010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:02.043 
[2024-12-15 09:54:50.898020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:17:02.043 [2024-12-15 09:54:50.898025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:50.932983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:50.933008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.043 [2024-12-15 09:54:50.933017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:50.933023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:50.933080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:50.933087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.043 [2024-12-15 09:54:50.933097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:50.933102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:50.933135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:50.933142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.043 [2024-12-15 09:54:50.933151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:50.933157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:50.933172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:50.933179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.043 [2024-12-15 09:54:50.933185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:50.933192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:50.992716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:50.992751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.043 [2024-12-15 09:54:50.992760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:50.992766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.043 [2024-12-15 09:54:51.015058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.043 [2024-12-15 09:54:51.015118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015152] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.043 [2024-12-15 09:54:51.015160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.043 [2024-12-15 09:54:51.015276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.043 [2024-12-15 09:54:51.015335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.043 [2024-12-15 09:54:51.015388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.043 [2024-12-15 09:54:51.015435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.043 [2024-12-15 09:54:51.015442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.043 [2024-12-15 09:54:51.015448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.043 [2024-12-15 09:54:51.015550] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 186.997 ms, result 0 00:17:02.977 09:54:51 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:02.977 [2024-12-15 09:54:51.717508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
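At this point trim.sh hands off to spdk_dd, which builds the bdev stack from ftl.json and copies --count=65536 logical blocks from the ftl0 bdev into test/ftl/data; the EAL parameter line and the second, near-identical FTL startup sequence that follow are that new process (spdk_pid72584) coming up. Assuming the usual 4 KiB FTL logical block size (it is not printed in this log), the copy amounts to 256 MiB:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t count = 65536;      /* spdk_dd --count=65536 */
	const uint64_t block_size = 4096;  /* assumed 4 KiB; not stated in this log */

	/* 65536 * 4096 bytes = 268435456 bytes = 256 MiB */
	printf("%llu MiB\n",
	       (unsigned long long)(count * block_size / (1024 * 1024)));
	return 0;
}
```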
00:17:02.977 [2024-12-15 09:54:51.718162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72584 ] 00:17:02.977 [2024-12-15 09:54:51.867319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.236 [2024-12-15 09:54:52.004032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.236 [2024-12-15 09:54:52.207300] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.236 [2024-12-15 09:54:52.207352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.498 [2024-12-15 09:54:52.358812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.358976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:03.498 [2024-12-15 09:54:52.358995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.498 [2024-12-15 09:54:52.359003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.361691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.361729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.498 [2024-12-15 09:54:52.361739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:17:03.498 [2024-12-15 09:54:52.361747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.361819] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:03.498 [2024-12-15 09:54:52.362631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:03.498 [2024-12-15 09:54:52.362733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.362813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.498 [2024-12-15 09:54:52.362840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:17:03.498 [2024-12-15 09:54:52.362860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.364021] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:03.498 [2024-12-15 09:54:52.376758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.376875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:03.498 [2024-12-15 09:54:52.376930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.738 ms 00:17:03.498 [2024-12-15 09:54:52.376953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.377043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.377070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:03.498 [2024-12-15 09:54:52.377090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:03.498 [2024-12-15 09:54:52.377108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.382394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 
09:54:52.382499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.498 [2024-12-15 09:54:52.382513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.232 ms 00:17:03.498 [2024-12-15 09:54:52.382526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.382628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.382638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.498 [2024-12-15 09:54:52.382647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:03.498 [2024-12-15 09:54:52.382654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.382679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.382687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:03.498 [2024-12-15 09:54:52.382695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:03.498 [2024-12-15 09:54:52.382702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.382728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:03.498 [2024-12-15 09:54:52.386267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.386296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.498 [2024-12-15 09:54:52.386306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms 00:17:03.498 [2024-12-15 09:54:52.386315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.386353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.386361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:03.498 [2024-12-15 09:54:52.386369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:03.498 [2024-12-15 09:54:52.386376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.386393] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:03.498 [2024-12-15 09:54:52.386409] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:03.498 [2024-12-15 09:54:52.386445] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:03.498 [2024-12-15 09:54:52.386462] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:03.498 [2024-12-15 09:54:52.386534] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:03.498 [2024-12-15 09:54:52.386544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:03.498 [2024-12-15 09:54:52.386553] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:03.498 [2024-12-15 09:54:52.386563] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:03.498 [2024-12-15 09:54:52.386571] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:03.498 [2024-12-15 09:54:52.386579] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:03.498 [2024-12-15 09:54:52.386586] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:03.498 [2024-12-15 09:54:52.386594] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:03.498 [2024-12-15 09:54:52.386603] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:03.498 [2024-12-15 09:54:52.386610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.386617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:03.498 [2024-12-15 09:54:52.386624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:17:03.498 [2024-12-15 09:54:52.386631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.386706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.498 [2024-12-15 09:54:52.386714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:03.498 [2024-12-15 09:54:52.386722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:03.498 [2024-12-15 09:54:52.386729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.498 [2024-12-15 09:54:52.386804] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:03.498 [2024-12-15 09:54:52.386814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:03.498 [2024-12-15 09:54:52.386821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.498 [2024-12-15 09:54:52.386829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:03.499 [2024-12-15 09:54:52.386843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:03.499 [2024-12-15 09:54:52.386857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:03.499 [2024-12-15 09:54:52.386864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.499 [2024-12-15 09:54:52.386877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:03.499 [2024-12-15 09:54:52.386883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:03.499 [2024-12-15 09:54:52.386890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.499 [2024-12-15 09:54:52.386897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:03.499 [2024-12-15 09:54:52.386910] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:03.499 [2024-12-15 09:54:52.386916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:03.499 [2024-12-15 09:54:52.386929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:03.499 [2024-12-15 09:54:52.386935] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:03.499 [2024-12-15 09:54:52.386948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:03.499 [2024-12-15 09:54:52.386955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:03.499 [2024-12-15 09:54:52.386961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:03.499 [2024-12-15 09:54:52.386968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.499 [2024-12-15 09:54:52.386980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:03.499 [2024-12-15 09:54:52.386986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:03.499 [2024-12-15 09:54:52.386993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.499 [2024-12-15 09:54:52.386999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:03.499 [2024-12-15 09:54:52.387005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:03.499 [2024-12-15 09:54:52.387011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.499 [2024-12-15 09:54:52.387018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:03.499 [2024-12-15 09:54:52.387024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:03.499 [2024-12-15 09:54:52.387030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:03.499 [2024-12-15 09:54:52.387037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:03.499 [2024-12-15 09:54:52.387044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:03.499 [2024-12-15 09:54:52.387050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.499 [2024-12-15 09:54:52.387056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:03.499 [2024-12-15 09:54:52.387062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:03.499 [2024-12-15 09:54:52.387070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.499 [2024-12-15 09:54:52.387076] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:03.499 [2024-12-15 09:54:52.387084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:03.499 [2024-12-15 09:54:52.387090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.499 [2024-12-15 09:54:52.387102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.499 [2024-12-15 09:54:52.387109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:03.499 [2024-12-15 09:54:52.387117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:03.499 [2024-12-15 09:54:52.387124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:03.499 [2024-12-15 09:54:52.387131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:03.499 [2024-12-15 09:54:52.387137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:03.499 [2024-12-15 09:54:52.387144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:03.499 [2024-12-15 09:54:52.387151] 
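
The layout dump above is internally consistent and can be sanity-checked by hand. With FTL's 4 KiB block size, the 23592960 L2P entries at 4 bytes apiece need 23592960 * 4 B = 90 MiB, which is exactly the "Region l2p ... blocks: 90.00 MiB" line, and the space those entries can map is 23592960 * 4 KiB = 92160 MiB (90 GiB). A minimal bash sketch of the arithmetic; the constants are copied from the log above, the script itself is illustrative and not part of the test suite:

  # Cross-check the FTL layout numbers dumped above (4096 B block size assumed).
  l2p_entries=23592960   # "L2P entries" from ftl_layout_setup
  l2p_addr=4             # "L2P address size" from ftl_layout_setup
  blk=4096               # FTL block size in bytes
  echo "l2p region:      $(( l2p_entries * l2p_addr / 1024 / 1024 )) MiB"   # -> 90
  echo "mapped capacity: $(( l2p_entries * blk / 1024 / 1024 )) MiB"        # -> 92160
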
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:03.499 [2024-12-15 09:54:52.387160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.499 [2024-12-15 09:54:52.387168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:03.499 [2024-12-15 09:54:52.387175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:03.499 [2024-12-15 09:54:52.387183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:03.499 [2024-12-15 09:54:52.387190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:03.499 [2024-12-15 09:54:52.387197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:03.499 [2024-12-15 09:54:52.387203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:03.499 [2024-12-15 09:54:52.387210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:03.499 [2024-12-15 09:54:52.387217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:03.499 [2024-12-15 09:54:52.387224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:03.499 [2024-12-15 09:54:52.387230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:03.499 [2024-12-15 09:54:52.387237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:03.499 [2024-12-15 09:54:52.387244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:03.499 [2024-12-15 09:54:52.387269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:03.499 [2024-12-15 09:54:52.387277] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:03.499 [2024-12-15 09:54:52.387287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.499 [2024-12-15 09:54:52.387294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:03.499 [2024-12-15 09:54:52.387301] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:03.499 [2024-12-15 09:54:52.387308] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:03.499 [2024-12-15 09:54:52.387316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:03.499 [2024-12-15 09:54:52.387324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.387331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:03.499 [2024-12-15 09:54:52.387338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:17:03.499 [2024-12-15 09:54:52.387345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.402586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.402619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.499 [2024-12-15 09:54:52.402629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.193 ms 00:17:03.499 [2024-12-15 09:54:52.402637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.402750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.402759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:03.499 [2024-12-15 09:54:52.402766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:03.499 [2024-12-15 09:54:52.402774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.446531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.446682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.499 [2024-12-15 09:54:52.446701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.736 ms 00:17:03.499 [2024-12-15 09:54:52.446709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.446781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.446792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.499 [2024-12-15 09:54:52.446806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.499 [2024-12-15 09:54:52.446813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.447191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.447217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.499 [2024-12-15 09:54:52.447227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:17:03.499 [2024-12-15 09:54:52.447235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.447374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.447384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.499 [2024-12-15 09:54:52.447393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:03.499 [2024-12-15 09:54:52.447400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.462783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.499 [2024-12-15 09:54:52.462819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.499 [2024-12-15 09:54:52.462829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.359 ms 00:17:03.499 
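
Every management step in this startup sequence is emitted by trace_step() in mngt/ftl_mngt.c as a fixed four-record group: Action, then name, then duration, then status. That makes the console output easy to condense into a per-step timing table. A hedged awk sketch, assuming the console output was saved to a file (build.log here is a hypothetical name) with one log record per line as the console prints it:

  # Turn the trace_step groups above into "step -> duration" pairs.
  awk '/407:trace_step.*name: /     { sub(/.*name: /, "");     step = $0 }
       /409:trace_step.*duration: / { sub(/.*duration: /, ""); print step " -> " $0 }' build.log

On the startup above this prints lines such as "Initialize NV cache -> 43.736 ms", and the individual step durations add up to roughly the 299.989 ms that finish_msg later reports for the whole 'FTL startup' process.
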
[2024-12-15 09:54:52.462840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.499 [2024-12-15 09:54:52.476426] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:03.500 [2024-12-15 09:54:52.476469] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:03.500 [2024-12-15 09:54:52.476480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.500 [2024-12-15 09:54:52.476488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:03.500 [2024-12-15 09:54:52.476497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.536 ms 00:17:03.500 [2024-12-15 09:54:52.476505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.500 [2024-12-15 09:54:52.501853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.500 [2024-12-15 09:54:52.502048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:03.500 [2024-12-15 09:54:52.502070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.263 ms 00:17:03.500 [2024-12-15 09:54:52.502079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.760 [2024-12-15 09:54:52.515658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.760 [2024-12-15 09:54:52.515714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:03.760 [2024-12-15 09:54:52.515740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.490 ms 00:17:03.760 [2024-12-15 09:54:52.515747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.760 [2024-12-15 09:54:52.529292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.760 [2024-12-15 09:54:52.529342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:03.760 [2024-12-15 09:54:52.529355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.435 ms 00:17:03.760 [2024-12-15 09:54:52.529362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.760 [2024-12-15 09:54:52.529777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.760 [2024-12-15 09:54:52.529791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:03.760 [2024-12-15 09:54:52.529801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:03.760 [2024-12-15 09:54:52.529813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.760 [2024-12-15 09:54:52.598087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.760 [2024-12-15 09:54:52.598147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:03.760 [2024-12-15 09:54:52.598161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.248 ms 00:17:03.760 [2024-12-15 09:54:52.598178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.760 [2024-12-15 09:54:52.609759] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:03.760 [2024-12-15 09:54:52.629421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.760 [2024-12-15 09:54:52.629476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:03.761 [2024-12-15 09:54:52.629488] 
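
The "full chunks = 2, empty chunks = 2" line from ftl_nv_cache_load_state ties back to the layout dump: the 4096.00 MiB data_nvc region is divided into the 4 chunks reported as "NV cache chunk count 4", i.e. 1024 MiB per chunk, and at this quiesced point every chunk happens to be either full or empty (a device under load could also have chunks in intermediate states). The arithmetic, as a sketch with values copied from the log:

  # NV cache chunk accounting from the ftl_nv_cache_load_state line above.
  data_nvc_mib=4096     # "Region data_nvc ... blocks: 4096.00 MiB"
  chunk_count=4         # "NV cache chunk count 4"
  full=2 empty=2        # "full chunks = 2, empty chunks = 2"
  echo "chunk size: $(( data_nvc_mib / chunk_count )) MiB"      # -> 1024
  (( full + empty == chunk_count )) && echo "all chunks accounted for"
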
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.099 ms 00:17:03.761 [2024-12-15 09:54:52.629497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.629588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.629602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:03.761 [2024-12-15 09:54:52.629611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:03.761 [2024-12-15 09:54:52.629623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.629680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.629690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:03.761 [2024-12-15 09:54:52.629699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:03.761 [2024-12-15 09:54:52.629707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.631101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.631147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:03.761 [2024-12-15 09:54:52.631158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:17:03.761 [2024-12-15 09:54:52.631166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.631207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.631216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:03.761 [2024-12-15 09:54:52.631224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:03.761 [2024-12-15 09:54:52.631232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.631291] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:03.761 [2024-12-15 09:54:52.631301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.631311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:03.761 [2024-12-15 09:54:52.631319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:03.761 [2024-12-15 09:54:52.631327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.657779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.657832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:03.761 [2024-12-15 09:54:52.657846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.427 ms 00:17:03.761 [2024-12-15 09:54:52.657855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.657974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.761 [2024-12-15 09:54:52.657987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:03.761 [2024-12-15 09:54:52.657998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:03.761 [2024-12-15 09:54:52.658006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.761 [2024-12-15 09:54:52.659143] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:03.761 [2024-12-15 09:54:52.662862] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.989 ms, result 0
00:17:03.761 [2024-12-15 09:54:52.664406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:03.761 [2024-12-15 09:54:52.678523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:05.148 [2024-12-15T09:54:54.736Z] Copying: 20/256 [MB] (20 MBps)
[2024-12-15T09:54:56.123Z] Copying: 34/256 [MB] (14 MBps)
[2024-12-15T09:54:57.067Z] Copying: 54/256 [MB] (19 MBps)
[2024-12-15T09:54:58.010Z] Copying: 69/256 [MB] (14 MBps)
[2024-12-15T09:54:58.949Z] Copying: 85/256 [MB] (16 MBps)
[2024-12-15T09:54:59.894Z] Copying: 103/256 [MB] (17 MBps)
[2024-12-15T09:55:00.836Z] Copying: 116/256 [MB] (12 MBps)
[2024-12-15T09:55:01.781Z] Copying: 132/256 [MB] (15 MBps)
[2024-12-15T09:55:03.169Z] Copying: 143/256 [MB] (11 MBps)
[2024-12-15T09:55:03.741Z] Copying: 157/256 [MB] (14 MBps)
[2024-12-15T09:55:05.128Z] Copying: 175/256 [MB] (17 MBps)
[2024-12-15T09:55:06.073Z] Copying: 194/256 [MB] (18 MBps)
[2024-12-15T09:55:07.017Z] Copying: 211/256 [MB] (17 MBps)
[2024-12-15T09:55:07.971Z] Copying: 227/256 [MB] (15 MBps)
[2024-12-15T09:55:08.980Z] Copying: 241/256 [MB] (14 MBps)
[2024-12-15T09:55:09.243Z] Copying: 256/256 [MB] (average 16 MBps)
[2024-12-15 09:55:08.995477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:20.227 [2024-12-15 09:55:09.010222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:20.227 [2024-12-15 09:55:09.010310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:20.227 [2024-12-15 09:55:09.010327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:17:20.227 [2024-12-15 09:55:09.010336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:20.227 [2024-12-15 09:55:09.010366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:20.227 [2024-12-15 09:55:09.013432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:20.227 [2024-12-15 09:55:09.013479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:20.227 [2024-12-15 09:55:09.013491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms
00:17:20.227 [2024-12-15 09:55:09.013500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:20.227 [2024-12-15 09:55:09.013808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:20.227 [2024-12-15 09:55:09.013821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:20.227 [2024-12-15 09:55:09.013834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms
00:17:20.227 [2024-12-15 09:55:09.013843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:20.227 [2024-12-15 09:55:09.018082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:20.227 [2024-12-15 09:55:09.018111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:20.227 [2024-12-15 09:55:09.018122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.222 ms
00:17:20.227 [2024-12-15 09:55:09.018131]
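
The Copying lines above are the progress meter of the spdk_dd process started at the top of this block; on a terminal they overwrite a single line via carriage returns, which is why the final sample originally ran straight into the next log record. The reported "average 16 MBps" can be re-derived from the samples themselves; a sketch, again assuming the console output was saved one record per line to a hypothetical build.log:

  # Recompute spdk_dd's average throughput from the Copying samples.
  grep -oE '\[[^]]+T[^]]+\] Copying: [0-9]+' build.log |
  sed -E 's/.*T([0-9:]+)\.[0-9]+Z\] Copying: ([0-9]+)/\1 \2/' |
  awk 'NR == 1 { split($1, a, ":"); t0 = a[1]*3600 + a[2]*60 + a[3]; m0 = $2 }
               { split($1, b, ":"); t1 = b[1]*3600 + b[2]*60 + b[3]; m1 = $2 }
       END     { printf "%.1f MBps over %d s\n", (m1 - m0)/(t1 - t0), t1 - t0 }'

For the run above that is (256 - 20) MB over the 15 s between the first and last sample, about 15.7 MBps, consistent with the rounded "average 16 MBps".
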
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.227 [2024-12-15 09:55:09.025459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.227 [2024-12-15 09:55:09.025656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:20.227 [2024-12-15 09:55:09.025680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.288 ms 00:17:20.227 [2024-12-15 09:55:09.025695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.227 [2024-12-15 09:55:09.052994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.227 [2024-12-15 09:55:09.053209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.228 [2024-12-15 09:55:09.053232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.210 ms 00:17:20.228 [2024-12-15 09:55:09.053240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.069772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.228 [2024-12-15 09:55:09.069824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.228 [2024-12-15 09:55:09.069838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.431 ms 00:17:20.228 [2024-12-15 09:55:09.069847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.070030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.228 [2024-12-15 09:55:09.070044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.228 [2024-12-15 09:55:09.070055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:20.228 [2024-12-15 09:55:09.070063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.096773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.228 [2024-12-15 09:55:09.096821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:20.228 [2024-12-15 09:55:09.096834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.691 ms 00:17:20.228 [2024-12-15 09:55:09.096842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.123148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.228 [2024-12-15 09:55:09.123196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:20.228 [2024-12-15 09:55:09.123208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.224 ms 00:17:20.228 [2024-12-15 09:55:09.123215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.149449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.228 [2024-12-15 09:55:09.149515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.228 [2024-12-15 09:55:09.149530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.136 ms 00:17:20.228 [2024-12-15 09:55:09.149537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.175607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.228 [2024-12-15 09:55:09.175655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.228 [2024-12-15 09:55:09.175668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.949 ms 00:17:20.228 [2024-12-15 09:55:09.175676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.228 [2024-12-15 09:55:09.175756] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.228 [2024-12-15 09:55:09.175775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 
[2024-12-15 09:55:09.175964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.175998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:20.228 [2024-12-15 09:55:09.176166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.228 [2024-12-15 09:55:09.176294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.229 [2024-12-15 09:55:09.176654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.229 [2024-12-15 09:55:09.176662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b04f4c43-114b-4db2-b39a-d0632519f454 00:17:20.229 [2024-12-15 09:55:09.176671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.229 [2024-12-15 09:55:09.176678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.229 [2024-12-15 09:55:09.176686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.229 [2024-12-15 09:55:09.176694] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.229 [2024-12-15 09:55:09.176708] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.229 [2024-12-15 09:55:09.176717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.229 [2024-12-15 09:55:09.176725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.229 [2024-12-15 09:55:09.176731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.229 [2024-12-15 09:55:09.176737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.229 [2024-12-15 09:55:09.176745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.229 [2024-12-15 09:55:09.176753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.229 [2024-12-15 09:55:09.176762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:17:20.229 [2024-12-15 09:55:09.176771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.229 [2024-12-15 09:55:09.190778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.229 [2024-12-15 09:55:09.190978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.229 [2024-12-15 09:55:09.191006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.985 ms 00:17:20.229 [2024-12-15 09:55:09.191014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.229 [2024-12-15 09:55:09.191288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.229 [2024-12-15 09:55:09.191301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.229 [2024-12-15 09:55:09.191311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:20.229 [2024-12-15 09:55:09.191319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.229 [2024-12-15 09:55:09.233410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.229 [2024-12-15 09:55:09.233612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.229 [2024-12-15 09:55:09.233633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.229 [2024-12-15 09:55:09.233643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.229 [2024-12-15 09:55:09.233739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.229 [2024-12-15 
09:55:09.233749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.229 [2024-12-15 09:55:09.233758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.229 [2024-12-15 09:55:09.233766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.229 [2024-12-15 09:55:09.233827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.229 [2024-12-15 09:55:09.233839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.229 [2024-12-15 09:55:09.233854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.229 [2024-12-15 09:55:09.233863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.229 [2024-12-15 09:55:09.233882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.229 [2024-12-15 09:55:09.233892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.229 [2024-12-15 09:55:09.233901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.229 [2024-12-15 09:55:09.233911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.316153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.316416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.491 [2024-12-15 09:55:09.316443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.316454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.491 [2024-12-15 09:55:09.349430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.491 [2024-12-15 09:55:09.349519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.491 [2024-12-15 09:55:09.349588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.491 [2024-12-15 09:55:09.349723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349774] 
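
The band dump above is the device's allocation picture at clean shutdown: all 100 bands are free with wr_cnt 0, and the statistics block explains the "WAF: inf" line, since write amplification is total writes divided by user writes and the ratio degenerates to infinity when the user-write counter is zero, as here (960 internal writes, 0 recorded user writes, and "total valid LBAs: 0" for a fully trimmed device). Note also that 100 bands * 261120 blocks * 4 KiB is 102000 MiB, most of the 103424 MiB base device, the remainder going to per-band and global metadata. An illustrative awk sketch for aggregating such dumps from a saved build.log:

  # Summarize the ftl_dev_dump_bands output: bands per state and total wr_cnt.
  # Expects one record per line, like "... Band 12: 0 / 261120 wr_cnt: 0 state: free".
  awk '/dump_bands:.*Band [0-9]+:/ {
           bands++
           wr += $(NF - 2)        # the wr_cnt value
           state[$NF]++           # the state keyword is the last field
       }
       END { for (s in state) printf "%-6s %d bands\n", s, state[s]
             printf "total wr_cnt: %d over %d bands\n", wr, bands }' build.log
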
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.491 [2024-12-15 09:55:09.349794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.491 [2024-12-15 09:55:09.349864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.349932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.491 [2024-12-15 09:55:09.349944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.491 [2024-12-15 09:55:09.349955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.491 [2024-12-15 09:55:09.349963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.491 [2024-12-15 09:55:09.350125] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.920 ms, result 0 00:17:21.435 00:17:21.435 00:17:21.435 09:55:10 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:22.007 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:22.007 09:55:10 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:22.007 09:55:10 -- ftl/trim.sh@109 -- # fio_kill 00:17:22.007 09:55:10 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:22.007 09:55:10 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.007 09:55:10 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:22.007 09:55:10 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:22.007 09:55:10 -- ftl/trim.sh@20 -- # killprocess 72525 00:17:22.007 Process with pid 72525 is not found 00:17:22.007 09:55:10 -- common/autotest_common.sh@936 -- # '[' -z 72525 ']' 00:17:22.007 09:55:10 -- common/autotest_common.sh@940 -- # kill -0 72525 00:17:22.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72525) - No such process 00:17:22.007 09:55:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72525 is not found' 00:17:22.007 00:17:22.007 real 1m15.466s 00:17:22.007 user 1m35.406s 00:17:22.007 sys 0m5.598s 00:17:22.007 ************************************ 00:17:22.007 END TEST ftl_trim 00:17:22.007 ************************************ 00:17:22.007 09:55:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:22.007 09:55:10 -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 09:55:10 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:22.007 09:55:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:22.007 09:55:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.007 09:55:10 -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 ************************************ 00:17:22.007 START TEST ftl_restore 00:17:22.007 
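
The run now moves from ftl_trim into ftl_restore, invoked (as the next records show) as restore.sh -c 0000:00:06.0 0000:00:07.0, where -c names the PCI device used as the FTL write buffer (NV cache) and the positional argument names the base device. The point of the test is to prove that data written through an FTL bdev survives the shutdown/startup cycle whose traces were dumped above, using the same md5sum -c mechanics that just passed for ftl_trim. A hedged outline of the round trip; the real logic lives in test/ftl/restore.sh, and the spdk_dd invocations below are illustrative rather than the script's exact commands:

  # Outline of the restore round-trip (illustrative, not the actual script).
  dd if=/dev/urandom of=random_pattern bs=1M count=256   # payload to push through FTL
  spdk_dd --if=random_pattern --ob=ftl0                  # write it to the FTL bdev
  # ... shut the FTL device down, re-create it from the persisted superblock ...
  spdk_dd --ib=ftl0 --of=data                            # read the same range back
  md5sum random_pattern | sed 's/random_pattern/data/' > testfile.md5
  md5sum -c testfile.md5                                 # prints "data: OK" on success
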
************************************ 00:17:22.007 09:55:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:22.269 * Looking for test storage... 00:17:22.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.269 09:55:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:22.269 09:55:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:22.269 09:55:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:22.269 09:55:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:22.269 09:55:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:22.269 09:55:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:22.269 09:55:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:22.269 09:55:11 -- scripts/common.sh@335 -- # IFS=.-: 00:17:22.269 09:55:11 -- scripts/common.sh@335 -- # read -ra ver1 00:17:22.269 09:55:11 -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.269 09:55:11 -- scripts/common.sh@336 -- # read -ra ver2 00:17:22.269 09:55:11 -- scripts/common.sh@337 -- # local 'op=<' 00:17:22.269 09:55:11 -- scripts/common.sh@339 -- # ver1_l=2 00:17:22.269 09:55:11 -- scripts/common.sh@340 -- # ver2_l=1 00:17:22.269 09:55:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:22.270 09:55:11 -- scripts/common.sh@343 -- # case "$op" in 00:17:22.270 09:55:11 -- scripts/common.sh@344 -- # : 1 00:17:22.270 09:55:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:22.270 09:55:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.270 09:55:11 -- scripts/common.sh@364 -- # decimal 1 00:17:22.270 09:55:11 -- scripts/common.sh@352 -- # local d=1 00:17:22.270 09:55:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.270 09:55:11 -- scripts/common.sh@354 -- # echo 1 00:17:22.270 09:55:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:22.270 09:55:11 -- scripts/common.sh@365 -- # decimal 2 00:17:22.270 09:55:11 -- scripts/common.sh@352 -- # local d=2 00:17:22.270 09:55:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.270 09:55:11 -- scripts/common.sh@354 -- # echo 2 00:17:22.270 09:55:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:22.270 09:55:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:22.270 09:55:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:22.270 09:55:11 -- scripts/common.sh@367 -- # return 0 00:17:22.270 09:55:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.270 09:55:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.270 --rc genhtml_branch_coverage=1 00:17:22.270 --rc genhtml_function_coverage=1 00:17:22.270 --rc genhtml_legend=1 00:17:22.270 --rc geninfo_all_blocks=1 00:17:22.270 --rc geninfo_unexecuted_blocks=1 00:17:22.270 00:17:22.270 ' 00:17:22.270 09:55:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.270 --rc genhtml_branch_coverage=1 00:17:22.270 --rc genhtml_function_coverage=1 00:17:22.270 --rc genhtml_legend=1 00:17:22.270 --rc geninfo_all_blocks=1 00:17:22.270 --rc geninfo_unexecuted_blocks=1 00:17:22.270 00:17:22.270 ' 00:17:22.270 09:55:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.270 --rc genhtml_branch_coverage=1 
00:17:22.270 --rc genhtml_function_coverage=1 00:17:22.270 --rc genhtml_legend=1 00:17:22.270 --rc geninfo_all_blocks=1 00:17:22.270 --rc geninfo_unexecuted_blocks=1 00:17:22.270 00:17:22.270 ' 00:17:22.270 09:55:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:22.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.270 --rc genhtml_branch_coverage=1 00:17:22.270 --rc genhtml_function_coverage=1 00:17:22.270 --rc genhtml_legend=1 00:17:22.270 --rc geninfo_all_blocks=1 00:17:22.270 --rc geninfo_unexecuted_blocks=1 00:17:22.270 00:17:22.270 ' 00:17:22.270 09:55:11 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:22.270 09:55:11 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:22.270 09:55:11 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.270 09:55:11 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.270 09:55:11 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:22.270 09:55:11 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:22.270 09:55:11 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.270 09:55:11 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:22.270 09:55:11 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:22.270 09:55:11 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.270 09:55:11 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.270 09:55:11 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:22.270 09:55:11 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:22.270 09:55:11 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.270 09:55:11 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.270 09:55:11 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:22.270 09:55:11 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:22.270 09:55:11 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.270 09:55:11 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.270 09:55:11 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:22.270 09:55:11 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:22.270 09:55:11 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.270 09:55:11 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.270 09:55:11 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.270 09:55:11 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.270 09:55:11 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:22.270 09:55:11 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:22.270 09:55:11 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.270 09:55:11 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.270 09:55:11 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.270 09:55:11 -- ftl/restore.sh@13 -- # mktemp -d 00:17:22.270 09:55:11 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.rkUGzsicYS 00:17:22.270 09:55:11 -- ftl/restore.sh@15 -- # getopts 
:u:c:f opt 00:17:22.270 09:55:11 -- ftl/restore.sh@16 -- # case $opt in 00:17:22.270 09:55:11 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:22.270 09:55:11 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:22.270 09:55:11 -- ftl/restore.sh@23 -- # shift 2 00:17:22.270 09:55:11 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:22.270 09:55:11 -- ftl/restore.sh@25 -- # timeout=240 00:17:22.270 09:55:11 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:22.270 09:55:11 -- ftl/restore.sh@39 -- # svcpid=72848 00:17:22.270 09:55:11 -- ftl/restore.sh@41 -- # waitforlisten 72848 00:17:22.270 09:55:11 -- common/autotest_common.sh@829 -- # '[' -z 72848 ']' 00:17:22.270 09:55:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.270 09:55:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.270 09:55:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.270 09:55:11 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.270 09:55:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.270 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.270 [2024-12-15 09:55:11.269094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:22.270 [2024-12-15 09:55:11.269494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72848 ] 00:17:22.532 [2024-12-15 09:55:11.422640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.793 [2024-12-15 09:55:11.652383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.793 [2024-12-15 09:55:11.652841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.179 09:55:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.179 09:55:12 -- common/autotest_common.sh@862 -- # return 0 00:17:24.179 09:55:12 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:24.179 09:55:12 -- ftl/common.sh@54 -- # local name=nvme0 00:17:24.179 09:55:12 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:24.179 09:55:12 -- ftl/common.sh@56 -- # local size=103424 00:17:24.179 09:55:12 -- ftl/common.sh@59 -- # local base_bdev 00:17:24.179 09:55:12 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:24.179 09:55:13 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:24.179 09:55:13 -- ftl/common.sh@62 -- # local base_size 00:17:24.179 09:55:13 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:24.179 09:55:13 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:24.180 09:55:13 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:24.180 09:55:13 -- common/autotest_common.sh@1369 -- # local bs 00:17:24.180 09:55:13 -- common/autotest_common.sh@1370 -- # local nb 00:17:24.180 09:55:13 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:24.440 09:55:13 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:24.440 { 00:17:24.440 "name": "nvme0n1", 00:17:24.440 "aliases": [ 
00:17:24.441 "37245f5a-b64b-4a16-95d3-eb33690d6893" 00:17:24.441 ], 00:17:24.441 "product_name": "NVMe disk", 00:17:24.441 "block_size": 4096, 00:17:24.441 "num_blocks": 1310720, 00:17:24.441 "uuid": "37245f5a-b64b-4a16-95d3-eb33690d6893", 00:17:24.441 "assigned_rate_limits": { 00:17:24.441 "rw_ios_per_sec": 0, 00:17:24.441 "rw_mbytes_per_sec": 0, 00:17:24.441 "r_mbytes_per_sec": 0, 00:17:24.441 "w_mbytes_per_sec": 0 00:17:24.441 }, 00:17:24.441 "claimed": true, 00:17:24.441 "claim_type": "read_many_write_one", 00:17:24.441 "zoned": false, 00:17:24.441 "supported_io_types": { 00:17:24.441 "read": true, 00:17:24.441 "write": true, 00:17:24.441 "unmap": true, 00:17:24.441 "write_zeroes": true, 00:17:24.441 "flush": true, 00:17:24.441 "reset": true, 00:17:24.441 "compare": true, 00:17:24.441 "compare_and_write": false, 00:17:24.441 "abort": true, 00:17:24.441 "nvme_admin": true, 00:17:24.441 "nvme_io": true 00:17:24.441 }, 00:17:24.441 "driver_specific": { 00:17:24.441 "nvme": [ 00:17:24.441 { 00:17:24.441 "pci_address": "0000:00:07.0", 00:17:24.441 "trid": { 00:17:24.441 "trtype": "PCIe", 00:17:24.441 "traddr": "0000:00:07.0" 00:17:24.441 }, 00:17:24.441 "ctrlr_data": { 00:17:24.441 "cntlid": 0, 00:17:24.441 "vendor_id": "0x1b36", 00:17:24.441 "model_number": "QEMU NVMe Ctrl", 00:17:24.441 "serial_number": "12341", 00:17:24.441 "firmware_revision": "8.0.0", 00:17:24.441 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:24.441 "oacs": { 00:17:24.441 "security": 0, 00:17:24.441 "format": 1, 00:17:24.441 "firmware": 0, 00:17:24.441 "ns_manage": 1 00:17:24.441 }, 00:17:24.441 "multi_ctrlr": false, 00:17:24.441 "ana_reporting": false 00:17:24.441 }, 00:17:24.441 "vs": { 00:17:24.441 "nvme_version": "1.4" 00:17:24.441 }, 00:17:24.441 "ns_data": { 00:17:24.441 "id": 1, 00:17:24.441 "can_share": false 00:17:24.441 } 00:17:24.441 } 00:17:24.441 ], 00:17:24.441 "mp_policy": "active_passive" 00:17:24.441 } 00:17:24.441 } 00:17:24.441 ]' 00:17:24.441 09:55:13 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:24.441 09:55:13 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:24.441 09:55:13 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:24.441 09:55:13 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:24.441 09:55:13 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:24.441 09:55:13 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:24.441 09:55:13 -- ftl/common.sh@63 -- # base_size=5120 00:17:24.441 09:55:13 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:24.441 09:55:13 -- ftl/common.sh@67 -- # clear_lvols 00:17:24.441 09:55:13 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:24.441 09:55:13 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:24.702 09:55:13 -- ftl/common.sh@28 -- # stores=ce76b67e-461b-492a-bff4-4a36ad565359 00:17:24.702 09:55:13 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:24.702 09:55:13 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce76b67e-461b-492a-bff4-4a36ad565359 00:17:24.963 09:55:13 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:25.223 09:55:13 -- ftl/common.sh@68 -- # lvs=d23bd721-b694-4582-a2c6-faa0c9305525 00:17:25.223 09:55:13 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d23bd721-b694-4582-a2c6-faa0c9305525 00:17:25.223 09:55:14 -- ftl/restore.sh@43 -- # 
split_bdev=85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.223 09:55:14 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:25.223 09:55:14 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.223 09:55:14 -- ftl/common.sh@35 -- # local name=nvc0 00:17:25.223 09:55:14 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:25.223 09:55:14 -- ftl/common.sh@37 -- # local base_bdev=85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.223 09:55:14 -- ftl/common.sh@38 -- # local cache_size= 00:17:25.223 09:55:14 -- ftl/common.sh@41 -- # get_bdev_size 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.223 09:55:14 -- common/autotest_common.sh@1367 -- # local bdev_name=85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.223 09:55:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:25.223 09:55:14 -- common/autotest_common.sh@1369 -- # local bs 00:17:25.223 09:55:14 -- common/autotest_common.sh@1370 -- # local nb 00:17:25.223 09:55:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.482 09:55:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:25.482 { 00:17:25.482 "name": "85a47adf-8746-4418-b8b4-8a6f10d7880d", 00:17:25.482 "aliases": [ 00:17:25.482 "lvs/nvme0n1p0" 00:17:25.482 ], 00:17:25.482 "product_name": "Logical Volume", 00:17:25.482 "block_size": 4096, 00:17:25.482 "num_blocks": 26476544, 00:17:25.482 "uuid": "85a47adf-8746-4418-b8b4-8a6f10d7880d", 00:17:25.482 "assigned_rate_limits": { 00:17:25.482 "rw_ios_per_sec": 0, 00:17:25.482 "rw_mbytes_per_sec": 0, 00:17:25.482 "r_mbytes_per_sec": 0, 00:17:25.482 "w_mbytes_per_sec": 0 00:17:25.482 }, 00:17:25.482 "claimed": false, 00:17:25.482 "zoned": false, 00:17:25.482 "supported_io_types": { 00:17:25.482 "read": true, 00:17:25.482 "write": true, 00:17:25.482 "unmap": true, 00:17:25.482 "write_zeroes": true, 00:17:25.482 "flush": false, 00:17:25.482 "reset": true, 00:17:25.482 "compare": false, 00:17:25.482 "compare_and_write": false, 00:17:25.482 "abort": false, 00:17:25.482 "nvme_admin": false, 00:17:25.482 "nvme_io": false 00:17:25.482 }, 00:17:25.482 "driver_specific": { 00:17:25.482 "lvol": { 00:17:25.482 "lvol_store_uuid": "d23bd721-b694-4582-a2c6-faa0c9305525", 00:17:25.482 "base_bdev": "nvme0n1", 00:17:25.482 "thin_provision": true, 00:17:25.482 "snapshot": false, 00:17:25.482 "clone": false, 00:17:25.482 "esnap_clone": false 00:17:25.482 } 00:17:25.482 } 00:17:25.482 } 00:17:25.482 ]' 00:17:25.482 09:55:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:25.482 09:55:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:25.482 09:55:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:25.482 09:55:14 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:25.482 09:55:14 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:25.482 09:55:14 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:25.482 09:55:14 -- ftl/common.sh@41 -- # local base_size=5171 00:17:25.482 09:55:14 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:25.482 09:55:14 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:25.740 09:55:14 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:25.740 09:55:14 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:25.740 09:55:14 -- ftl/common.sh@48 -- # get_bdev_size 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.740 09:55:14 -- 
common/autotest_common.sh@1367 -- # local bdev_name=85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.740 09:55:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:25.740 09:55:14 -- common/autotest_common.sh@1369 -- # local bs 00:17:25.740 09:55:14 -- common/autotest_common.sh@1370 -- # local nb 00:17:25.740 09:55:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:25.998 09:55:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:25.998 { 00:17:25.998 "name": "85a47adf-8746-4418-b8b4-8a6f10d7880d", 00:17:25.998 "aliases": [ 00:17:25.998 "lvs/nvme0n1p0" 00:17:25.998 ], 00:17:25.998 "product_name": "Logical Volume", 00:17:25.998 "block_size": 4096, 00:17:25.998 "num_blocks": 26476544, 00:17:25.998 "uuid": "85a47adf-8746-4418-b8b4-8a6f10d7880d", 00:17:25.998 "assigned_rate_limits": { 00:17:25.998 "rw_ios_per_sec": 0, 00:17:25.998 "rw_mbytes_per_sec": 0, 00:17:25.998 "r_mbytes_per_sec": 0, 00:17:25.998 "w_mbytes_per_sec": 0 00:17:25.998 }, 00:17:25.998 "claimed": false, 00:17:25.998 "zoned": false, 00:17:25.998 "supported_io_types": { 00:17:25.998 "read": true, 00:17:25.998 "write": true, 00:17:25.998 "unmap": true, 00:17:25.998 "write_zeroes": true, 00:17:25.998 "flush": false, 00:17:25.998 "reset": true, 00:17:25.998 "compare": false, 00:17:25.998 "compare_and_write": false, 00:17:25.998 "abort": false, 00:17:25.998 "nvme_admin": false, 00:17:25.998 "nvme_io": false 00:17:25.998 }, 00:17:25.998 "driver_specific": { 00:17:25.998 "lvol": { 00:17:25.998 "lvol_store_uuid": "d23bd721-b694-4582-a2c6-faa0c9305525", 00:17:25.998 "base_bdev": "nvme0n1", 00:17:25.998 "thin_provision": true, 00:17:25.998 "snapshot": false, 00:17:25.998 "clone": false, 00:17:25.998 "esnap_clone": false 00:17:25.998 } 00:17:25.998 } 00:17:25.998 } 00:17:25.998 ]' 00:17:25.998 09:55:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:25.998 09:55:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:25.998 09:55:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:25.998 09:55:14 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:25.998 09:55:14 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:25.998 09:55:14 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:25.998 09:55:14 -- ftl/common.sh@48 -- # cache_size=5171 00:17:25.998 09:55:14 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:26.262 09:55:15 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:26.262 09:55:15 -- ftl/restore.sh@48 -- # get_bdev_size 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:26.262 09:55:15 -- common/autotest_common.sh@1367 -- # local bdev_name=85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:26.262 09:55:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:26.262 09:55:15 -- common/autotest_common.sh@1369 -- # local bs 00:17:26.262 09:55:15 -- common/autotest_common.sh@1370 -- # local nb 00:17:26.262 09:55:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85a47adf-8746-4418-b8b4-8a6f10d7880d 00:17:26.525 09:55:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:26.525 { 00:17:26.525 "name": "85a47adf-8746-4418-b8b4-8a6f10d7880d", 00:17:26.525 "aliases": [ 00:17:26.525 "lvs/nvme0n1p0" 00:17:26.525 ], 00:17:26.525 "product_name": "Logical Volume", 00:17:26.525 "block_size": 4096, 00:17:26.525 "num_blocks": 26476544, 00:17:26.525 "uuid": 
"85a47adf-8746-4418-b8b4-8a6f10d7880d", 00:17:26.525 "assigned_rate_limits": { 00:17:26.525 "rw_ios_per_sec": 0, 00:17:26.525 "rw_mbytes_per_sec": 0, 00:17:26.525 "r_mbytes_per_sec": 0, 00:17:26.525 "w_mbytes_per_sec": 0 00:17:26.525 }, 00:17:26.525 "claimed": false, 00:17:26.525 "zoned": false, 00:17:26.525 "supported_io_types": { 00:17:26.525 "read": true, 00:17:26.525 "write": true, 00:17:26.525 "unmap": true, 00:17:26.525 "write_zeroes": true, 00:17:26.525 "flush": false, 00:17:26.525 "reset": true, 00:17:26.525 "compare": false, 00:17:26.525 "compare_and_write": false, 00:17:26.525 "abort": false, 00:17:26.525 "nvme_admin": false, 00:17:26.525 "nvme_io": false 00:17:26.525 }, 00:17:26.525 "driver_specific": { 00:17:26.525 "lvol": { 00:17:26.525 "lvol_store_uuid": "d23bd721-b694-4582-a2c6-faa0c9305525", 00:17:26.525 "base_bdev": "nvme0n1", 00:17:26.525 "thin_provision": true, 00:17:26.525 "snapshot": false, 00:17:26.525 "clone": false, 00:17:26.525 "esnap_clone": false 00:17:26.525 } 00:17:26.525 } 00:17:26.525 } 00:17:26.525 ]' 00:17:26.525 09:55:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:26.525 09:55:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:26.525 09:55:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:26.525 09:55:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:26.526 09:55:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:26.526 09:55:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:26.526 09:55:15 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:26.526 09:55:15 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 85a47adf-8746-4418-b8b4-8a6f10d7880d --l2p_dram_limit 10' 00:17:26.526 09:55:15 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:26.526 09:55:15 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:26.526 09:55:15 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:26.526 09:55:15 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:26.526 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:26.526 09:55:15 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 85a47adf-8746-4418-b8b4-8a6f10d7880d --l2p_dram_limit 10 -c nvc0n1p0 00:17:26.787 [2024-12-15 09:55:15.581573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.581614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.787 [2024-12-15 09:55:15.581626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.787 [2024-12-15 09:55:15.581635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.581679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.581686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.787 [2024-12-15 09:55:15.581694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:26.787 [2024-12-15 09:55:15.581700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.581717] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.787 [2024-12-15 09:55:15.582334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.787 [2024-12-15 09:55:15.582351] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.582357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.787 [2024-12-15 09:55:15.582365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:17:26.787 [2024-12-15 09:55:15.582371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.582399] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 47932767-df3c-47cb-a32e-4820bc91e495 00:17:26.787 [2024-12-15 09:55:15.583375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.583481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:26.787 [2024-12-15 09:55:15.583495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:26.787 [2024-12-15 09:55:15.583503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.588281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.588308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.787 [2024-12-15 09:55:15.588316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:17:26.787 [2024-12-15 09:55:15.588323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.588392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.588400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.787 [2024-12-15 09:55:15.588407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:26.787 [2024-12-15 09:55:15.588417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.588453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.588464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.787 [2024-12-15 09:55:15.588470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:26.787 [2024-12-15 09:55:15.588477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.588495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.787 [2024-12-15 09:55:15.591443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.591468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.787 [2024-12-15 09:55:15.591476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:17:26.787 [2024-12-15 09:55:15.591482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.591512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.591518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.787 [2024-12-15 09:55:15.591526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.787 [2024-12-15 09:55:15.591531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.591551] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup 
mode 1 00:17:26.787 [2024-12-15 09:55:15.591637] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:26.787 [2024-12-15 09:55:15.591649] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.787 [2024-12-15 09:55:15.591657] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:26.787 [2024-12-15 09:55:15.591666] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.787 [2024-12-15 09:55:15.591673] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.787 [2024-12-15 09:55:15.591682] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:26.787 [2024-12-15 09:55:15.591694] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.787 [2024-12-15 09:55:15.591700] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:26.787 [2024-12-15 09:55:15.591706] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:26.787 [2024-12-15 09:55:15.591713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.591718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.787 [2024-12-15 09:55:15.591726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:26.787 [2024-12-15 09:55:15.591732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.591780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.787 [2024-12-15 09:55:15.591786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.787 [2024-12-15 09:55:15.591792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:26.787 [2024-12-15 09:55:15.591799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.787 [2024-12-15 09:55:15.591854] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.787 [2024-12-15 09:55:15.591861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.787 [2024-12-15 09:55:15.591869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.787 [2024-12-15 09:55:15.591875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.787 [2024-12-15 09:55:15.591882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.787 [2024-12-15 09:55:15.591887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.787 [2024-12-15 09:55:15.591893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:26.787 [2024-12-15 09:55:15.591898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.787 [2024-12-15 09:55:15.591904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:26.787 [2024-12-15 09:55:15.591909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.787 [2024-12-15 09:55:15.591915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.787 [2024-12-15 09:55:15.591921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:26.787 [2024-12-15 09:55:15.591928] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.787 [2024-12-15 09:55:15.591933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.787 [2024-12-15 09:55:15.591939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:26.787 [2024-12-15 09:55:15.591944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.787 [2024-12-15 09:55:15.591952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.787 [2024-12-15 09:55:15.591959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:26.787 [2024-12-15 09:55:15.591965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.787 [2024-12-15 09:55:15.591970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:26.787 [2024-12-15 09:55:15.591976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:26.787 [2024-12-15 09:55:15.591981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:26.787 [2024-12-15 09:55:15.591988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.787 [2024-12-15 09:55:15.591993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:26.787 [2024-12-15 09:55:15.591999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:26.787 [2024-12-15 09:55:15.592004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.788 [2024-12-15 09:55:15.592011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:26.788 [2024-12-15 09:55:15.592015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:26.788 [2024-12-15 09:55:15.592022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.788 [2024-12-15 09:55:15.592026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:26.788 [2024-12-15 09:55:15.592032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:26.788 [2024-12-15 09:55:15.592037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.788 [2024-12-15 09:55:15.592045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:26.788 [2024-12-15 09:55:15.592050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:26.788 [2024-12-15 09:55:15.592056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.788 [2024-12-15 09:55:15.592061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:26.788 [2024-12-15 09:55:15.592067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.788 [2024-12-15 09:55:15.592072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.788 [2024-12-15 09:55:15.592079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:26.788 [2024-12-15 09:55:15.592083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.788 [2024-12-15 09:55:15.592089] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.788 [2024-12-15 09:55:15.592095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.788 [2024-12-15 09:55:15.592102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.788 [2024-12-15 09:55:15.592107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.788 [2024-12-15 09:55:15.592116] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.788 [2024-12-15 09:55:15.592121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.788 [2024-12-15 09:55:15.592127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.788 [2024-12-15 09:55:15.592133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.788 [2024-12-15 09:55:15.592140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.788 [2024-12-15 09:55:15.592146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.788 [2024-12-15 09:55:15.592154] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.788 [2024-12-15 09:55:15.592161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.788 [2024-12-15 09:55:15.592168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:26.788 [2024-12-15 09:55:15.592174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:26.788 [2024-12-15 09:55:15.592180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:26.788 [2024-12-15 09:55:15.592186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:26.788 [2024-12-15 09:55:15.592193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:26.788 [2024-12-15 09:55:15.592198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:26.788 [2024-12-15 09:55:15.592205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:26.788 [2024-12-15 09:55:15.592210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:26.788 [2024-12-15 09:55:15.592216] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:26.788 [2024-12-15 09:55:15.592222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:26.788 [2024-12-15 09:55:15.592228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:26.788 [2024-12-15 09:55:15.592234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:26.788 [2024-12-15 09:55:15.592243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:26.788 [2024-12-15 09:55:15.592248] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.788 [2024-12-15 09:55:15.592269] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 
00:17:26.788 [2024-12-15 09:55:15.592276] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.788 [2024-12-15 09:55:15.592283] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.788 [2024-12-15 09:55:15.592288] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.788 [2024-12-15 09:55:15.592295] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.788 [2024-12-15 09:55:15.592301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.592308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.788 [2024-12-15 09:55:15.592314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:17:26.788 [2024-12-15 09:55:15.592321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.604239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.604282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.788 [2024-12-15 09:55:15.604291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.876 ms 00:17:26.788 [2024-12-15 09:55:15.604298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.604368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.604377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.788 [2024-12-15 09:55:15.604385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:26.788 [2024-12-15 09:55:15.604392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.628297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.628327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.788 [2024-12-15 09:55:15.628335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.874 ms 00:17:26.788 [2024-12-15 09:55:15.628343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.628367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.628375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.788 [2024-12-15 09:55:15.628381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:26.788 [2024-12-15 09:55:15.628390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.628703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.628718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.788 [2024-12-15 09:55:15.628725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:26.788 [2024-12-15 09:55:15.628732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.628819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 
09:55:15.628828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.788 [2024-12-15 09:55:15.628834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:26.788 [2024-12-15 09:55:15.628841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.640930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.640956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.788 [2024-12-15 09:55:15.640964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.075 ms 00:17:26.788 [2024-12-15 09:55:15.640971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.649936] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:26.788 [2024-12-15 09:55:15.652196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.652221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.788 [2024-12-15 09:55:15.652231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.170 ms 00:17:26.788 [2024-12-15 09:55:15.652237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.712740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.788 [2024-12-15 09:55:15.712772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:26.788 [2024-12-15 09:55:15.712783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.468 ms 00:17:26.788 [2024-12-15 09:55:15.712790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.788 [2024-12-15 09:55:15.712827] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
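A quick cross-check on the "l2p maximum resident size is: 9 (of 10) MiB" notice a few records above: the full L2P table size follows from the layout dump earlier in this startup ("L2P entries: 20971520", "L2P address size: 4"), while the 10 MiB budget comes from the --l2p_dram_limit 10 argument assembled into ftl_construct_args. A minimal sketch of the same arithmetic in shell (variable names are illustrative, not taken from the test scripts):

    entries=20971520   # from the ftl0 layout dump above
    addr_size=4        # bytes per L2P entry, also from the dump
    echo $(( entries * addr_size / 1024 / 1024 ))   # prints 80 - matches "Region l2p ... blocks: 80.00 MiB"

With only 10 MiB of DRAM allowed for an 80 MiB table, the L2P is paged in and out on demand; how much of that budget stays resident (9 MiB here) is an internal choice of ftl_l2p_cache.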
00:17:26.788 [2024-12-15 09:55:15.712836] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:30.993 [2024-12-15 09:55:19.314187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.314309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:30.993 [2024-12-15 09:55:19.314333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3601.332 ms 00:17:30.993 [2024-12-15 09:55:19.314342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.314573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.314586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.993 [2024-12-15 09:55:19.314601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:30.993 [2024-12-15 09:55:19.314610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.341576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.341647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:30.993 [2024-12-15 09:55:19.341664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.901 ms 00:17:30.993 [2024-12-15 09:55:19.341672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.367661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.367883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:30.993 [2024-12-15 09:55:19.367918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.924 ms 00:17:30.993 [2024-12-15 09:55:19.367926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.368295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.368308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.993 [2024-12-15 09:55:19.368320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:30.993 [2024-12-15 09:55:19.368328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.441646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.441703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:30.993 [2024-12-15 09:55:19.441721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.251 ms 00:17:30.993 [2024-12-15 09:55:19.441729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.470233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.470298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:30.993 [2024-12-15 09:55:19.470314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.440 ms 00:17:30.993 [2024-12-15 09:55:19.470322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.471807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.471860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
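As a rough sanity check on the scrub that just completed, dividing the 4 GiB nv cache data region (data_nvc is 4096.00 MiB in the layout dump) by the reported duration of 3601.332 ms gives the effective wipe bandwidth; awk is used below only because shell arithmetic is integer-only:

    awk 'BEGIN { printf "%.2f GiB/s\n", 4 / 3.601332 }'   # ~1.11 GiB/s over nvc0n1p0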
00:17:30.993 [2024-12-15 09:55:19.471876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:17:30.993 [2024-12-15 09:55:19.471884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.499724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.499778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.993 [2024-12-15 09:55:19.499793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.773 ms 00:17:30.993 [2024-12-15 09:55:19.499801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.499868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.499878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.993 [2024-12-15 09:55:19.499889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:30.993 [2024-12-15 09:55:19.499897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.500008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.500019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.993 [2024-12-15 09:55:19.500030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:30.993 [2024-12-15 09:55:19.500037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.501247] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3919.156 ms, result 0 00:17:30.993 { 00:17:30.993 "name": "ftl0", 00:17:30.993 "uuid": "47932767-df3c-47cb-a32e-4820bc91e495" 00:17:30.993 } 00:17:30.993 09:55:19 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:30.993 09:55:19 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:30.993 09:55:19 -- ftl/restore.sh@63 -- # echo ']}' 00:17:30.993 09:55:19 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:30.993 [2024-12-15 09:55:19.924546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.924831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:30.993 [2024-12-15 09:55:19.924858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.993 [2024-12-15 09:55:19.924870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.924907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.993 [2024-12-15 09:55:19.927801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.927848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:30.993 [2024-12-15 09:55:19.927863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:17:30.993 [2024-12-15 09:55:19.927879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.928162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.928173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:30.993 [2024-12-15 
09:55:19.928185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:17:30.993 [2024-12-15 09:55:19.928194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.931764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.931886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:30.993 [2024-12-15 09:55:19.932004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:17:30.993 [2024-12-15 09:55:19.932032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.938213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.938389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:30.993 [2024-12-15 09:55:19.938414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:17:30.993 [2024-12-15 09:55:19.938423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.965330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.965531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:30.993 [2024-12-15 09:55:19.965557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.803 ms 00:17:30.993 [2024-12-15 09:55:19.965565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.983680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.983737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.993 [2024-12-15 09:55:19.983755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.061 ms 00:17:30.993 [2024-12-15 09:55:19.983764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.993 [2024-12-15 09:55:19.983945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.993 [2024-12-15 09:55:19.983957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.993 [2024-12-15 09:55:19.983969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:30.993 [2024-12-15 09:55:19.983980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.255 [2024-12-15 09:55:20.011018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.255 [2024-12-15 09:55:20.011075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:31.256 [2024-12-15 09:55:20.011092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.008 ms 00:17:31.256 [2024-12-15 09:55:20.011100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.256 [2024-12-15 09:55:20.038525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.256 [2024-12-15 09:55:20.038741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:31.256 [2024-12-15 09:55:20.038770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.357 ms 00:17:31.256 [2024-12-15 09:55:20.038779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.256 [2024-12-15 09:55:20.064980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.256 [2024-12-15 09:55:20.065034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:17:31.256 [2024-12-15 09:55:20.065050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.143 ms 00:17:31.256 [2024-12-15 09:55:20.065058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.256 [2024-12-15 09:55:20.091399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.256 [2024-12-15 09:55:20.091451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:31.256 [2024-12-15 09:55:20.091466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.216 ms 00:17:31.256 [2024-12-15 09:55:20.091474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.256 [2024-12-15 09:55:20.091533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:31.256 [2024-12-15 09:55:20.091554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 
[2024-12-15 09:55:20.091738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:17:31.256 [2024-12-15 09:55:20.091960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.091997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:31.256 [2024-12-15 09:55:20.092205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:31.257 [2024-12-15 09:55:20.092554] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:31.257 [2024-12-15 09:55:20.092565] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47932767-df3c-47cb-a32e-4820bc91e495 00:17:31.257 [2024-12-15 09:55:20.092573] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:31.257 [2024-12-15 09:55:20.092582] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:31.257 [2024-12-15 09:55:20.092618] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:31.257 [2024-12-15 09:55:20.092630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:31.257 [2024-12-15 09:55:20.092637] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:31.257 [2024-12-15 09:55:20.092648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:31.257 [2024-12-15 09:55:20.092656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:31.257 [2024-12-15 09:55:20.092665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:31.257 [2024-12-15 09:55:20.092671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:31.257 [2024-12-15 09:55:20.092684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.257 [2024-12-15 09:55:20.092692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:31.257 [2024-12-15 09:55:20.092706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:17:31.257 [2024-12-15 09:55:20.092714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 [2024-12-15 09:55:20.106874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.257 [2024-12-15 09:55:20.106921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:31.257 [2024-12-15 09:55:20.106935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.094 ms 00:17:31.257 [2024-12-15 09:55:20.106944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 [2024-12-15 09:55:20.107167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.257 [2024-12-15 09:55:20.107179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:31.257 [2024-12-15 09:55:20.107190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:17:31.257 [2024-12-15 09:55:20.107198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 
[2024-12-15 09:55:20.157401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.257 [2024-12-15 09:55:20.157454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.257 [2024-12-15 09:55:20.157469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.257 [2024-12-15 09:55:20.157478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 [2024-12-15 09:55:20.157561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.257 [2024-12-15 09:55:20.157573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.257 [2024-12-15 09:55:20.157584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.257 [2024-12-15 09:55:20.157592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 [2024-12-15 09:55:20.157676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.257 [2024-12-15 09:55:20.157686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.257 [2024-12-15 09:55:20.157696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.257 [2024-12-15 09:55:20.157704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 [2024-12-15 09:55:20.157725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.257 [2024-12-15 09:55:20.157734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.257 [2024-12-15 09:55:20.157747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.257 [2024-12-15 09:55:20.157755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.257 [2024-12-15 09:55:20.241425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.257 [2024-12-15 09:55:20.241480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.257 [2024-12-15 09:55:20.241496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.257 [2024-12-15 09:55:20.241505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.518 [2024-12-15 09:55:20.273545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.518 [2024-12-15 09:55:20.273598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.518 [2024-12-15 09:55:20.273611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.518 [2024-12-15 09:55:20.273620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.518 [2024-12-15 09:55:20.273698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.519 [2024-12-15 09:55:20.273708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.519 [2024-12-15 09:55:20.273719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.519 [2024-12-15 09:55:20.273727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.519 [2024-12-15 09:55:20.273776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.519 [2024-12-15 09:55:20.273786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.519 [2024-12-15 09:55:20.273796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.519 [2024-12-15 09:55:20.273807] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.519 [2024-12-15 09:55:20.273922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.519 [2024-12-15 09:55:20.273932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.519 [2024-12-15 09:55:20.273943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.519 [2024-12-15 09:55:20.273951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.519 [2024-12-15 09:55:20.273989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.519 [2024-12-15 09:55:20.273999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:31.519 [2024-12-15 09:55:20.274010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.519 [2024-12-15 09:55:20.274017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.519 [2024-12-15 09:55:20.274066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.519 [2024-12-15 09:55:20.274076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.519 [2024-12-15 09:55:20.274087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.519 [2024-12-15 09:55:20.274094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.519 [2024-12-15 09:55:20.274147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.519 [2024-12-15 09:55:20.274157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.519 [2024-12-15 09:55:20.274167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.519 [2024-12-15 09:55:20.274178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.519 [2024-12-15 09:55:20.274366] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.772 ms, result 0 00:17:31.519 true 00:17:31.519 09:55:20 -- ftl/restore.sh@66 -- # killprocess 72848 00:17:31.519 09:55:20 -- common/autotest_common.sh@936 -- # '[' -z 72848 ']' 00:17:31.519 09:55:20 -- common/autotest_common.sh@940 -- # kill -0 72848 00:17:31.519 09:55:20 -- common/autotest_common.sh@941 -- # uname 00:17:31.519 09:55:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.519 09:55:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72848 00:17:31.519 killing process with pid 72848 00:17:31.519 09:55:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.519 09:55:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.519 09:55:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72848' 00:17:31.519 09:55:20 -- common/autotest_common.sh@955 -- # kill 72848 00:17:31.519 09:55:20 -- common/autotest_common.sh@960 -- # wait 72848 00:17:38.099 09:55:26 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:42.297 262144+0 records in 00:17:42.297 262144+0 records out 00:17:42.297 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.24938 s, 253 MB/s 00:17:42.297 09:55:30 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:44.215 09:55:32 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:44.215 [2024-12-15 09:55:32.832409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.215 [2024-12-15 09:55:32.832552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73102 ] 00:17:44.215 [2024-12-15 09:55:32.982218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.215 [2024-12-15 09:55:33.130068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.474 [2024-12-15 09:55:33.333220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.474 [2024-12-15 09:55:33.333274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.474 [2024-12-15 09:55:33.475789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.474 [2024-12-15 09:55:33.475823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.474 [2024-12-15 09:55:33.475833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.474 [2024-12-15 09:55:33.475841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.474 [2024-12-15 09:55:33.475873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.474 [2024-12-15 09:55:33.475881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.474 [2024-12-15 09:55:33.475887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:44.474 [2024-12-15 09:55:33.475892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.474 [2024-12-15 09:55:33.475904] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.474 [2024-12-15 09:55:33.476503] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.474 [2024-12-15 09:55:33.476525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.474 [2024-12-15 09:55:33.476531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.474 [2024-12-15 09:55:33.476538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:17:44.474 [2024-12-15 09:55:33.476543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.474 [2024-12-15 09:55:33.477446] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.474 [2024-12-15 09:55:33.487421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.474 [2024-12-15 09:55:33.487449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.474 [2024-12-15 09:55:33.487458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.976 ms 00:17:44.474 [2024-12-15 09:55:33.487464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.475 [2024-12-15 09:55:33.487503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.475 [2024-12-15 09:55:33.487510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.475 [2024-12-15 09:55:33.487517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:44.475 [2024-12-15 09:55:33.487523] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.491847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.491872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.735 [2024-12-15 09:55:33.491879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.281 ms 00:17:44.735 [2024-12-15 09:55:33.491884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.491946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.491952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.735 [2024-12-15 09:55:33.491958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:44.735 [2024-12-15 09:55:33.491964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.491996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.492003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.735 [2024-12-15 09:55:33.492010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.735 [2024-12-15 09:55:33.492015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.492034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.735 [2024-12-15 09:55:33.494796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.494813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.735 [2024-12-15 09:55:33.494820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:17:44.735 [2024-12-15 09:55:33.494825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.494852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.494859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.735 [2024-12-15 09:55:33.494865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:44.735 [2024-12-15 09:55:33.494872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.494885] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.735 [2024-12-15 09:55:33.494900] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:44.735 [2024-12-15 09:55:33.494925] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.735 [2024-12-15 09:55:33.494936] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:44.735 [2024-12-15 09:55:33.494993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:44.735 [2024-12-15 09:55:33.495001] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.735 [2024-12-15 09:55:33.495010] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:44.735 [2024-12-15 09:55:33.495018] 
ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495024] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495031] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.735 [2024-12-15 09:55:33.495037] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.735 [2024-12-15 09:55:33.495043] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:44.735 [2024-12-15 09:55:33.495048] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:44.735 [2024-12-15 09:55:33.495053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.495059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.735 [2024-12-15 09:55:33.495064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:17:44.735 [2024-12-15 09:55:33.495070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.495115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.735 [2024-12-15 09:55:33.495122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.735 [2024-12-15 09:55:33.495127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:44.735 [2024-12-15 09:55:33.495132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.735 [2024-12-15 09:55:33.495184] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.735 [2024-12-15 09:55:33.495191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.735 [2024-12-15 09:55:33.495197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495209] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.735 [2024-12-15 09:55:33.495214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.735 [2024-12-15 09:55:33.495229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.735 [2024-12-15 09:55:33.495240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.735 [2024-12-15 09:55:33.495245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.735 [2024-12-15 09:55:33.495250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.735 [2024-12-15 09:55:33.495269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.735 [2024-12-15 09:55:33.495274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:44.735 [2024-12-15 09:55:33.495279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495289] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.735 [2024-12-15 
09:55:33.495294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:44.735 [2024-12-15 09:55:33.495299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:44.735 [2024-12-15 09:55:33.495309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:44.735 [2024-12-15 09:55:33.495314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.735 [2024-12-15 09:55:33.495324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.735 [2024-12-15 09:55:33.495339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.735 [2024-12-15 09:55:33.495353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.735 [2024-12-15 09:55:33.495368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.735 [2024-12-15 09:55:33.495382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.735 [2024-12-15 09:55:33.495392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.735 [2024-12-15 09:55:33.495397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:44.735 [2024-12-15 09:55:33.495402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.735 [2024-12-15 09:55:33.495407] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.735 [2024-12-15 09:55:33.495415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.735 [2024-12-15 09:55:33.495420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.735 [2024-12-15 09:55:33.495432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.735 [2024-12-15 09:55:33.495437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.735 [2024-12-15 09:55:33.495442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.735 [2024-12-15 09:55:33.495447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.735 [2024-12-15 09:55:33.495452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 
MiB 00:17:44.735 [2024-12-15 09:55:33.495457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.735 [2024-12-15 09:55:33.495463] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.735 [2024-12-15 09:55:33.495470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.735 [2024-12-15 09:55:33.495477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.735 [2024-12-15 09:55:33.495482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:44.736 [2024-12-15 09:55:33.495487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:44.736 [2024-12-15 09:55:33.495493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:44.736 [2024-12-15 09:55:33.495498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:44.736 [2024-12-15 09:55:33.495503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:44.736 [2024-12-15 09:55:33.495508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:44.736 [2024-12-15 09:55:33.495513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:44.736 [2024-12-15 09:55:33.495518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:44.736 [2024-12-15 09:55:33.495524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:44.736 [2024-12-15 09:55:33.495529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:44.736 [2024-12-15 09:55:33.495534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:44.736 [2024-12-15 09:55:33.495540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:44.736 [2024-12-15 09:55:33.495545] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.736 [2024-12-15 09:55:33.495550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.736 [2024-12-15 09:55:33.495557] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.736 [2024-12-15 09:55:33.495562] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.736 [2024-12-15 09:55:33.495568] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 
blk_sz:0x360 00:17:44.736 [2024-12-15 09:55:33.495573] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.736 [2024-12-15 09:55:33.495579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.495585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.736 [2024-12-15 09:55:33.495591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:17:44.736 [2024-12-15 09:55:33.495596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.507461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.507487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.736 [2024-12-15 09:55:33.507495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.838 ms 00:17:44.736 [2024-12-15 09:55:33.507503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.507566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.507572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.736 [2024-12-15 09:55:33.507578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:44.736 [2024-12-15 09:55:33.507583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.543169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.543202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.736 [2024-12-15 09:55:33.543212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.552 ms 00:17:44.736 [2024-12-15 09:55:33.543219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.543243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.543251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.736 [2024-12-15 09:55:33.543268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:44.736 [2024-12-15 09:55:33.543275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.543579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.543604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.736 [2024-12-15 09:55:33.543611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:44.736 [2024-12-15 09:55:33.543620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.543707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.543716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.736 [2024-12-15 09:55:33.543723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:44.736 [2024-12-15 09:55:33.543728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.554653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.554677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:17:44.736 [2024-12-15 09:55:33.554684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.909 ms 00:17:44.736 [2024-12-15 09:55:33.554690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.564469] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:44.736 [2024-12-15 09:55:33.564498] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.736 [2024-12-15 09:55:33.564506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.564511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.736 [2024-12-15 09:55:33.564518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.749 ms 00:17:44.736 [2024-12-15 09:55:33.564524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.583100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.583137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.736 [2024-12-15 09:55:33.583146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:17:44.736 [2024-12-15 09:55:33.583151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.592306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.592331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.736 [2024-12-15 09:55:33.592338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.127 ms 00:17:44.736 [2024-12-15 09:55:33.592343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.601047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.601071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.736 [2024-12-15 09:55:33.601083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.678 ms 00:17:44.736 [2024-12-15 09:55:33.601088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.601360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.601369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.736 [2024-12-15 09:55:33.601375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:17:44.736 [2024-12-15 09:55:33.601381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.646469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.646502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.736 [2024-12-15 09:55:33.646512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.077 ms 00:17:44.736 [2024-12-15 09:55:33.646518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.654493] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:44.736 [2024-12-15 09:55:33.656097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:44.736 [2024-12-15 09:55:33.656120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.736 [2024-12-15 09:55:33.656128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.545 ms 00:17:44.736 [2024-12-15 09:55:33.656135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.656182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.656190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.736 [2024-12-15 09:55:33.656196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.736 [2024-12-15 09:55:33.656202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.656242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.656249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.736 [2024-12-15 09:55:33.656267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:44.736 [2024-12-15 09:55:33.656272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.657182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.657209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:44.736 [2024-12-15 09:55:33.657216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:17:44.736 [2024-12-15 09:55:33.657221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.657243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.657249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.736 [2024-12-15 09:55:33.657269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.736 [2024-12-15 09:55:33.657279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.657312] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.736 [2024-12-15 09:55:33.657320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.657325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.736 [2024-12-15 09:55:33.657334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:44.736 [2024-12-15 09:55:33.657339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.675725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.736 [2024-12-15 09:55:33.675751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.736 [2024-12-15 09:55:33.675759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.373 ms 00:17:44.736 [2024-12-15 09:55:33.675765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.736 [2024-12-15 09:55:33.675816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.737 [2024-12-15 09:55:33.675827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.737 [2024-12-15 09:55:33.675833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:44.737 [2024-12-15 
09:55:33.675838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.737 [2024-12-15 09:55:33.676526] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 200.405 ms, result 0 00:17:46.109  [2024-12-15T09:55:35.690Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-15T09:55:37.156Z] Copying: 58/1024 [MB] (29 MBps) [2024-12-15T09:55:37.722Z] Copying: 77/1024 [MB] (18 MBps) [2024-12-15T09:55:39.169Z] Copying: 98/1024 [MB] (21 MBps) [2024-12-15T09:55:39.733Z] Copying: 141/1024 [MB] (43 MBps) [2024-12-15T09:55:41.104Z] Copying: 168/1024 [MB] (26 MBps) [2024-12-15T09:55:42.035Z] Copying: 197/1024 [MB] (28 MBps) [2024-12-15T09:55:42.978Z] Copying: 228/1024 [MB] (30 MBps) [2024-12-15T09:55:43.915Z] Copying: 249/1024 [MB] (21 MBps) [2024-12-15T09:55:44.856Z] Copying: 267/1024 [MB] (18 MBps) [2024-12-15T09:55:45.798Z] Copying: 306/1024 [MB] (39 MBps) [2024-12-15T09:55:46.744Z] Copying: 318/1024 [MB] (11 MBps) [2024-12-15T09:55:48.129Z] Copying: 331/1024 [MB] (12 MBps) [2024-12-15T09:55:48.702Z] Copying: 349/1024 [MB] (17 MBps) [2024-12-15T09:55:50.087Z] Copying: 367/1024 [MB] (18 MBps) [2024-12-15T09:55:51.027Z] Copying: 388/1024 [MB] (20 MBps) [2024-12-15T09:55:51.972Z] Copying: 401/1024 [MB] (13 MBps) [2024-12-15T09:55:52.904Z] Copying: 411/1024 [MB] (10 MBps) [2024-12-15T09:55:53.843Z] Copying: 439/1024 [MB] (27 MBps) [2024-12-15T09:55:54.781Z] Copying: 458/1024 [MB] (18 MBps) [2024-12-15T09:55:55.717Z] Copying: 479/1024 [MB] (21 MBps) [2024-12-15T09:55:57.087Z] Copying: 518/1024 [MB] (38 MBps) [2024-12-15T09:55:58.019Z] Copying: 543/1024 [MB] (25 MBps) [2024-12-15T09:55:58.952Z] Copying: 567/1024 [MB] (24 MBps) [2024-12-15T09:55:59.884Z] Copying: 594/1024 [MB] (26 MBps) [2024-12-15T09:56:00.818Z] Copying: 621/1024 [MB] (27 MBps) [2024-12-15T09:56:01.753Z] Copying: 659/1024 [MB] (37 MBps) [2024-12-15T09:56:02.696Z] Copying: 700/1024 [MB] (41 MBps) [2024-12-15T09:56:04.082Z] Copying: 711/1024 [MB] (10 MBps) [2024-12-15T09:56:05.023Z] Copying: 728/1024 [MB] (17 MBps) [2024-12-15T09:56:05.961Z] Copying: 746/1024 [MB] (17 MBps) [2024-12-15T09:56:06.907Z] Copying: 757/1024 [MB] (10 MBps) [2024-12-15T09:56:07.848Z] Copying: 774/1024 [MB] (17 MBps) [2024-12-15T09:56:08.790Z] Copying: 795/1024 [MB] (21 MBps) [2024-12-15T09:56:09.731Z] Copying: 819/1024 [MB] (23 MBps) [2024-12-15T09:56:11.112Z] Copying: 838/1024 [MB] (19 MBps) [2024-12-15T09:56:12.056Z] Copying: 853/1024 [MB] (14 MBps) [2024-12-15T09:56:12.999Z] Copying: 870/1024 [MB] (17 MBps) [2024-12-15T09:56:13.938Z] Copying: 894/1024 [MB] (23 MBps) [2024-12-15T09:56:14.879Z] Copying: 914/1024 [MB] (19 MBps) [2024-12-15T09:56:15.822Z] Copying: 936/1024 [MB] (22 MBps) [2024-12-15T09:56:16.765Z] Copying: 957/1024 [MB] (20 MBps) [2024-12-15T09:56:17.703Z] Copying: 976/1024 [MB] (19 MBps) [2024-12-15T09:56:19.085Z] Copying: 1002/1024 [MB] (26 MBps) [2024-12-15T09:56:19.085Z] Copying: 1020/1024 [MB] (18 MBps) [2024-12-15T09:56:19.085Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-15 09:56:18.939811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:18.939876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:30.069 [2024-12-15 09:56:18.939891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:30.069 [2024-12-15 09:56:18.939900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:18.939922] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:30.069 [2024-12-15 09:56:18.943136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:18.943326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:30.069 [2024-12-15 09:56:18.943359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.196 ms 00:18:30.069 [2024-12-15 09:56:18.943367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:18.946389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:18.946554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:30.069 [2024-12-15 09:56:18.946574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:18:30.069 [2024-12-15 09:56:18.946582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:18.965711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:18.965896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:30.069 [2024-12-15 09:56:18.965919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.106 ms 00:18:30.069 [2024-12-15 09:56:18.965936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:18.972041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:18.972083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:30.069 [2024-12-15 09:56:18.972095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:18:30.069 [2024-12-15 09:56:18.972103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:18.999985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:19.000198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.069 [2024-12-15 09:56:19.000222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.802 ms 00:18:30.069 [2024-12-15 09:56:19.000231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:19.016425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:19.016474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.069 [2024-12-15 09:56:19.016489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.128 ms 00:18:30.069 [2024-12-15 09:56:19.016498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:19.016684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:19.016697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.069 [2024-12-15 09:56:19.016707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:30.069 [2024-12-15 09:56:19.016715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:19.043492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:19.043540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:30.069 [2024-12-15 09:56:19.043554] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.761 ms 00:18:30.069 [2024-12-15 09:56:19.043560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.069 [2024-12-15 09:56:19.070015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.069 [2024-12-15 09:56:19.070063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:30.069 [2024-12-15 09:56:19.070075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.406 ms 00:18:30.069 [2024-12-15 09:56:19.070095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.332 [2024-12-15 09:56:19.096007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.332 [2024-12-15 09:56:19.096056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:30.332 [2024-12-15 09:56:19.096067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.863 ms 00:18:30.332 [2024-12-15 09:56:19.096074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.332 [2024-12-15 09:56:19.121875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.332 [2024-12-15 09:56:19.122066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:30.332 [2024-12-15 09:56:19.122087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.693 ms 00:18:30.332 [2024-12-15 09:56:19.122093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.332 [2024-12-15 09:56:19.122180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:30.332 [2024-12-15 09:56:19.122197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122333] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:30.332 [2024-12-15 09:56:19.122370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122532] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 
09:56:19.122727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:18:30.333 [2024-12-15 09:56:19.122915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.122998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:30.333 [2024-12-15 09:56:19.123015] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:30.333 [2024-12-15 09:56:19.123024] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47932767-df3c-47cb-a32e-4820bc91e495 00:18:30.333 [2024-12-15 09:56:19.123031] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:30.333 [2024-12-15 09:56:19.123039] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:30.333 [2024-12-15 09:56:19.123046] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:30.334 [2024-12-15 09:56:19.123053] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:30.334 [2024-12-15 09:56:19.123061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:30.334 [2024-12-15 09:56:19.123068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:30.334 [2024-12-15 09:56:19.123075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:30.334 [2024-12-15 09:56:19.123082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:30.334 [2024-12-15 09:56:19.123098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:30.334 [2024-12-15 09:56:19.123105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.334 [2024-12-15 09:56:19.123112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:30.334 [2024-12-15 09:56:19.123121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:18:30.334 [2024-12-15 09:56:19.123131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.137164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.334 [2024-12-15 
09:56:19.137347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:30.334 [2024-12-15 09:56:19.137408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:18:30.334 [2024-12-15 09:56:19.137431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.137671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.334 [2024-12-15 09:56:19.137796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:30.334 [2024-12-15 09:56:19.137830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:30.334 [2024-12-15 09:56:19.137848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.177289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.177471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.334 [2024-12-15 09:56:19.177531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.177552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.177631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.177653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.334 [2024-12-15 09:56:19.177681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.177700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.177794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.177885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.334 [2024-12-15 09:56:19.177910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.177929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.177960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.177981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.334 [2024-12-15 09:56:19.178009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.178032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.259762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.259988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.334 [2024-12-15 09:56:19.260055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.260079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.292462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.292648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.334 [2024-12-15 09:56:19.292708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.292738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.292825] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.292849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.334 [2024-12-15 09:56:19.292871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.292889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.292942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.292965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.334 [2024-12-15 09:56:19.292986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.293072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.293206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.293231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.334 [2024-12-15 09:56:19.293280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.293303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.293352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.293375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:30.334 [2024-12-15 09:56:19.293490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.293514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.293578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.293601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.334 [2024-12-15 09:56:19.293620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.293785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.293853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.334 [2024-12-15 09:56:19.293877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.334 [2024-12-15 09:56:19.294001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.334 [2024-12-15 09:56:19.294061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.334 [2024-12-15 09:56:19.294224] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.374 ms, result 0 00:18:31.722 00:18:31.722 00:18:31.722 09:56:20 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:31.722 [2024-12-15 09:56:20.495938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:31.722 [2024-12-15 09:56:20.496344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73596 ] 00:18:31.722 [2024-12-15 09:56:20.647557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.983 [2024-12-15 09:56:20.866538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.245 [2024-12-15 09:56:21.154838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:32.245 [2024-12-15 09:56:21.154919] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:32.507 [2024-12-15 09:56:21.308550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.507 [2024-12-15 09:56:21.308626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:32.507 [2024-12-15 09:56:21.308643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:32.507 [2024-12-15 09:56:21.308655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.507 [2024-12-15 09:56:21.308714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.507 [2024-12-15 09:56:21.308725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.507 [2024-12-15 09:56:21.308733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:32.507 [2024-12-15 09:56:21.308741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.507 [2024-12-15 09:56:21.308762] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:32.507 [2024-12-15 09:56:21.309577] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:32.508 [2024-12-15 09:56:21.309598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.309606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.508 [2024-12-15 09:56:21.309616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:18:32.508 [2024-12-15 09:56:21.309624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.311377] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:32.508 [2024-12-15 09:56:21.325567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.325637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:32.508 [2024-12-15 09:56:21.325652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.194 ms 00:18:32.508 [2024-12-15 09:56:21.325661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.325744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.325754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:32.508 [2024-12-15 09:56:21.325763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:32.508 [2024-12-15 09:56:21.325771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.334322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 
09:56:21.334366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.508 [2024-12-15 09:56:21.334377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.469 ms 00:18:32.508 [2024-12-15 09:56:21.334385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.334485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.334494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.508 [2024-12-15 09:56:21.334503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:32.508 [2024-12-15 09:56:21.334512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.334560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.334569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:32.508 [2024-12-15 09:56:21.334577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:32.508 [2024-12-15 09:56:21.334585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.334615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:32.508 [2024-12-15 09:56:21.339067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.339107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.508 [2024-12-15 09:56:21.339118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.464 ms 00:18:32.508 [2024-12-15 09:56:21.339125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.339168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.339176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:32.508 [2024-12-15 09:56:21.339185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:32.508 [2024-12-15 09:56:21.339195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.339270] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:32.508 [2024-12-15 09:56:21.339295] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:32.508 [2024-12-15 09:56:21.339330] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:32.508 [2024-12-15 09:56:21.339346] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:32.508 [2024-12-15 09:56:21.339422] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:32.508 [2024-12-15 09:56:21.339433] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:32.508 [2024-12-15 09:56:21.339446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:32.508 [2024-12-15 09:56:21.339457] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339465] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339474] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:32.508 [2024-12-15 09:56:21.339481] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:32.508 [2024-12-15 09:56:21.339490] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:32.508 [2024-12-15 09:56:21.339498] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:32.508 [2024-12-15 09:56:21.339506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.339513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:32.508 [2024-12-15 09:56:21.339521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:18:32.508 [2024-12-15 09:56:21.339529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.339594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.508 [2024-12-15 09:56:21.339602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:32.508 [2024-12-15 09:56:21.339610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:32.508 [2024-12-15 09:56:21.339617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.508 [2024-12-15 09:56:21.339689] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:32.508 [2024-12-15 09:56:21.339699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:32.508 [2024-12-15 09:56:21.339707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:32.508 [2024-12-15 09:56:21.339729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:32.508 [2024-12-15 09:56:21.339753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:32.508 [2024-12-15 09:56:21.339767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:32.508 [2024-12-15 09:56:21.339774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:32.508 [2024-12-15 09:56:21.339782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:32.508 [2024-12-15 09:56:21.339790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:32.508 [2024-12-15 09:56:21.339796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:32.508 [2024-12-15 09:56:21.339803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:32.508 [2024-12-15 09:56:21.339824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:32.508 [2024-12-15 09:56:21.339830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:32.508 [2024-12-15 09:56:21.339843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:32.508 [2024-12-15 09:56:21.339850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:32.508 [2024-12-15 09:56:21.339863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:32.508 [2024-12-15 09:56:21.339883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:32.508 [2024-12-15 09:56:21.339903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:32.508 [2024-12-15 09:56:21.339922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:32.508 [2024-12-15 09:56:21.339941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:32.508 [2024-12-15 09:56:21.339947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:32.508 [2024-12-15 09:56:21.339953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:32.508 [2024-12-15 09:56:21.339960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:32.508 [2024-12-15 09:56:21.339967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:32.508 [2024-12-15 09:56:21.339973] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:32.508 [2024-12-15 09:56:21.339983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:32.508 [2024-12-15 09:56:21.339990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:32.508 [2024-12-15 09:56:21.339998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:32.508 [2024-12-15 09:56:21.340009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:32.508 [2024-12-15 09:56:21.340016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:32.508 [2024-12-15 09:56:21.340023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:32.508 [2024-12-15 09:56:21.340030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:32.508 [2024-12-15 09:56:21.340037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:32.508 [2024-12-15 09:56:21.340044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:32.509 [2024-12-15 09:56:21.340051] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:32.509 [2024-12-15 09:56:21.340060] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:32.509 [2024-12-15 09:56:21.340069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:32.509 [2024-12-15 09:56:21.340077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:32.509 [2024-12-15 09:56:21.340084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:32.509 [2024-12-15 09:56:21.340091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:32.509 [2024-12-15 09:56:21.340098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:32.509 [2024-12-15 09:56:21.340106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:32.509 [2024-12-15 09:56:21.340113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:32.509 [2024-12-15 09:56:21.340120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:32.509 [2024-12-15 09:56:21.340127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:32.509 [2024-12-15 09:56:21.340134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:32.509 [2024-12-15 09:56:21.340141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:32.509 [2024-12-15 09:56:21.340149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:32.509 [2024-12-15 09:56:21.340157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:32.509 [2024-12-15 09:56:21.340165] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:32.509 [2024-12-15 09:56:21.340173] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:32.509 [2024-12-15 09:56:21.340181] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:32.509 [2024-12-15 09:56:21.340188] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:32.509 [2024-12-15 09:56:21.340195] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:32.509 [2024-12-15 09:56:21.340203] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:32.509 [2024-12-15 09:56:21.340212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.340219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:32.509 [2024-12-15 09:56:21.340227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:18:32.509 [2024-12-15 09:56:21.340233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.359018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.359216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.509 [2024-12-15 09:56:21.359318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.728 ms 00:18:32.509 [2024-12-15 09:56:21.359352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.359467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.359490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:32.509 [2024-12-15 09:56:21.359546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:32.509 [2024-12-15 09:56:21.359569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.406491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.406698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.509 [2024-12-15 09:56:21.406772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.846 ms 00:18:32.509 [2024-12-15 09:56:21.406796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.406864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.406890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.509 [2024-12-15 09:56:21.406910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:32.509 [2024-12-15 09:56:21.406930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.407559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.407721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.509 [2024-12-15 09:56:21.407781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:32.509 [2024-12-15 09:56:21.407812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.407968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.407993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.509 [2024-12-15 09:56:21.408057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:32.509 [2024-12-15 09:56:21.408079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.424955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.425121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.509 [2024-12-15 09:56:21.425181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.838 ms 00:18:32.509 [2024-12-15 
09:56:21.425204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.440005] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:32.509 [2024-12-15 09:56:21.440201] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:32.509 [2024-12-15 09:56:21.440306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.440329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:32.509 [2024-12-15 09:56:21.440351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.945 ms 00:18:32.509 [2024-12-15 09:56:21.440369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.466748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.466925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:32.509 [2024-12-15 09:56:21.466989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.322 ms 00:18:32.509 [2024-12-15 09:56:21.467013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.480630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.480803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:32.509 [2024-12-15 09:56:21.480862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.459 ms 00:18:32.509 [2024-12-15 09:56:21.480884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.494219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.494413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:32.509 [2024-12-15 09:56:21.494475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.208 ms 00:18:32.509 [2024-12-15 09:56:21.494497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.509 [2024-12-15 09:56:21.494900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.509 [2024-12-15 09:56:21.494938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:32.509 [2024-12-15 09:56:21.495033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:18:32.509 [2024-12-15 09:56:21.495057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.562504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.562565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:32.771 [2024-12-15 09:56:21.562581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.413 ms 00:18:32.771 [2024-12-15 09:56:21.562589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.574100] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:32.771 [2024-12-15 09:56:21.577455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.577503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:32.771 [2024-12-15 09:56:21.577516] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.800 ms 00:18:32.771 [2024-12-15 09:56:21.577532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.577610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.577621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:32.771 [2024-12-15 09:56:21.577630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:32.771 [2024-12-15 09:56:21.577639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.577705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.577716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:32.771 [2024-12-15 09:56:21.577725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:32.771 [2024-12-15 09:56:21.577733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.579122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.579171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:32.771 [2024-12-15 09:56:21.579182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:18:32.771 [2024-12-15 09:56:21.579190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.579227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.579236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:32.771 [2024-12-15 09:56:21.579265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:32.771 [2024-12-15 09:56:21.579274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.579312] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:32.771 [2024-12-15 09:56:21.579323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.579334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:32.771 [2024-12-15 09:56:21.579342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:32.771 [2024-12-15 09:56:21.579349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.606212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.606278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:32.771 [2024-12-15 09:56:21.606292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.842 ms 00:18:32.771 [2024-12-15 09:56:21.606301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.606396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.771 [2024-12-15 09:56:21.606408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:32.771 [2024-12-15 09:56:21.606417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:32.771 [2024-12-15 09:56:21.606425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.771 [2024-12-15 09:56:21.607822] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 298.787 ms, result 0 00:18:34.189  [2024-12-15T09:56:23.826Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-15T09:56:25.214Z] Copying: 25/1024 [MB] (10 MBps) [2024-12-15T09:56:26.157Z] Copying: 39/1024 [MB] (14 MBps) [2024-12-15T09:56:27.102Z] Copying: 55/1024 [MB] (15 MBps) [2024-12-15T09:56:28.045Z] Copying: 72/1024 [MB] (17 MBps) [2024-12-15T09:56:28.988Z] Copying: 90/1024 [MB] (17 MBps) [2024-12-15T09:56:29.932Z] Copying: 105/1024 [MB] (15 MBps) [2024-12-15T09:56:30.872Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-15T09:56:31.816Z] Copying: 133/1024 [MB] (12 MBps) [2024-12-15T09:56:33.202Z] Copying: 148/1024 [MB] (15 MBps) [2024-12-15T09:56:34.148Z] Copying: 168/1024 [MB] (19 MBps) [2024-12-15T09:56:35.091Z] Copying: 181/1024 [MB] (12 MBps) [2024-12-15T09:56:36.028Z] Copying: 192/1024 [MB] (10 MBps) [2024-12-15T09:56:36.972Z] Copying: 204/1024 [MB] (12 MBps) [2024-12-15T09:56:37.917Z] Copying: 218/1024 [MB] (14 MBps) [2024-12-15T09:56:38.858Z] Copying: 234/1024 [MB] (16 MBps) [2024-12-15T09:56:39.796Z] Copying: 244/1024 [MB] (10 MBps) [2024-12-15T09:56:41.179Z] Copying: 260/1024 [MB] (15 MBps) [2024-12-15T09:56:42.122Z] Copying: 270/1024 [MB] (10 MBps) [2024-12-15T09:56:43.063Z] Copying: 284/1024 [MB] (13 MBps) [2024-12-15T09:56:44.005Z] Copying: 297/1024 [MB] (12 MBps) [2024-12-15T09:56:44.950Z] Copying: 307/1024 [MB] (10 MBps) [2024-12-15T09:56:45.961Z] Copying: 322/1024 [MB] (14 MBps) [2024-12-15T09:56:46.930Z] Copying: 338/1024 [MB] (16 MBps) [2024-12-15T09:56:47.874Z] Copying: 350/1024 [MB] (12 MBps) [2024-12-15T09:56:48.818Z] Copying: 361/1024 [MB] (10 MBps) [2024-12-15T09:56:50.207Z] Copying: 372/1024 [MB] (11 MBps) [2024-12-15T09:56:51.149Z] Copying: 388/1024 [MB] (15 MBps) [2024-12-15T09:56:52.095Z] Copying: 402/1024 [MB] (13 MBps) [2024-12-15T09:56:53.040Z] Copying: 413/1024 [MB] (10 MBps) [2024-12-15T09:56:53.983Z] Copying: 432/1024 [MB] (19 MBps) [2024-12-15T09:56:54.929Z] Copying: 442/1024 [MB] (10 MBps) [2024-12-15T09:56:55.874Z] Copying: 454/1024 [MB] (11 MBps) [2024-12-15T09:56:56.819Z] Copying: 464/1024 [MB] (10 MBps) [2024-12-15T09:56:58.209Z] Copying: 476/1024 [MB] (11 MBps) [2024-12-15T09:56:59.154Z] Copying: 487/1024 [MB] (10 MBps) [2024-12-15T09:57:00.099Z] Copying: 497/1024 [MB] (10 MBps) [2024-12-15T09:57:01.043Z] Copying: 508/1024 [MB] (10 MBps) [2024-12-15T09:57:01.986Z] Copying: 518/1024 [MB] (10 MBps) [2024-12-15T09:57:02.930Z] Copying: 529/1024 [MB] (10 MBps) [2024-12-15T09:57:03.873Z] Copying: 546/1024 [MB] (17 MBps) [2024-12-15T09:57:04.817Z] Copying: 570/1024 [MB] (23 MBps) [2024-12-15T09:57:06.202Z] Copying: 590/1024 [MB] (20 MBps) [2024-12-15T09:57:07.147Z] Copying: 610/1024 [MB] (20 MBps) [2024-12-15T09:57:08.092Z] Copying: 632/1024 [MB] (21 MBps) [2024-12-15T09:57:09.088Z] Copying: 646/1024 [MB] (13 MBps) [2024-12-15T09:57:10.029Z] Copying: 665/1024 [MB] (18 MBps) [2024-12-15T09:57:10.971Z] Copying: 680/1024 [MB] (15 MBps) [2024-12-15T09:57:11.917Z] Copying: 699/1024 [MB] (19 MBps) [2024-12-15T09:57:12.862Z] Copying: 713/1024 [MB] (13 MBps) [2024-12-15T09:57:13.807Z] Copying: 726/1024 [MB] (12 MBps) [2024-12-15T09:57:15.195Z] Copying: 743/1024 [MB] (16 MBps) [2024-12-15T09:57:16.141Z] Copying: 759/1024 [MB] (15 MBps) [2024-12-15T09:57:17.135Z] Copying: 770/1024 [MB] (11 MBps) [2024-12-15T09:57:18.089Z] Copying: 783/1024 [MB] (13 MBps) [2024-12-15T09:57:19.034Z] Copying: 795/1024 [MB] (11 MBps) [2024-12-15T09:57:19.980Z] Copying: 806/1024 [MB] (10 MBps) [2024-12-15T09:57:20.927Z] Copying: 816/1024 
[MB] (10 MBps) [2024-12-15T09:57:21.872Z] Copying: 846552/1048576 [kB] (10216 kBps) [2024-12-15T09:57:22.816Z] Copying: 836/1024 [MB] (10 MBps) [2024-12-15T09:57:24.206Z] Copying: 847/1024 [MB] (10 MBps) [2024-12-15T09:57:25.150Z] Copying: 857/1024 [MB] (10 MBps) [2024-12-15T09:57:26.090Z] Copying: 868/1024 [MB] (10 MBps) [2024-12-15T09:57:27.034Z] Copying: 887/1024 [MB] (19 MBps) [2024-12-15T09:57:27.974Z] Copying: 899/1024 [MB] (11 MBps) [2024-12-15T09:57:28.918Z] Copying: 910/1024 [MB] (10 MBps) [2024-12-15T09:57:29.862Z] Copying: 930/1024 [MB] (20 MBps) [2024-12-15T09:57:30.804Z] Copying: 946/1024 [MB] (15 MBps) [2024-12-15T09:57:32.188Z] Copying: 960/1024 [MB] (14 MBps) [2024-12-15T09:57:33.132Z] Copying: 976/1024 [MB] (15 MBps) [2024-12-15T09:57:34.079Z] Copying: 989/1024 [MB] (12 MBps) [2024-12-15T09:57:35.022Z] Copying: 1008/1024 [MB] (19 MBps) [2024-12-15T09:57:35.022Z] Copying: 1022/1024 [MB] (14 MBps) [2024-12-15T09:57:35.022Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-15 09:57:34.947003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.006 [2024-12-15 09:57:34.947078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:46.006 [2024-12-15 09:57:34.947098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.006 [2024-12-15 09:57:34.947109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.006 [2024-12-15 09:57:34.947139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.006 [2024-12-15 09:57:34.950987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.006 [2024-12-15 09:57:34.951038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:46.006 [2024-12-15 09:57:34.951052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:19:46.006 [2024-12-15 09:57:34.951064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.006 [2024-12-15 09:57:34.951393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.006 [2024-12-15 09:57:34.951437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:46.006 [2024-12-15 09:57:34.951450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:46.006 [2024-12-15 09:57:34.951461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.006 [2024-12-15 09:57:34.956444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.006 [2024-12-15 09:57:34.956473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:46.006 [2024-12-15 09:57:34.956490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.965 ms 00:19:46.007 [2024-12-15 09:57:34.956501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.007 [2024-12-15 09:57:34.961233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.007 [2024-12-15 09:57:34.961274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:46.007 [2024-12-15 09:57:34.961282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.713 ms 00:19:46.007 [2024-12-15 09:57:34.961289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.007 [2024-12-15 09:57:34.981119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.007 [2024-12-15 09:57:34.981149] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:46.007 [2024-12-15 09:57:34.981158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.780 ms 00:19:46.007 [2024-12-15 09:57:34.981164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.007 [2024-12-15 09:57:35.001949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.007 [2024-12-15 09:57:35.002006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:46.007 [2024-12-15 09:57:35.002020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.755 ms 00:19:46.007 [2024-12-15 09:57:35.002036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.007 [2024-12-15 09:57:35.002187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.007 [2024-12-15 09:57:35.002199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:46.007 [2024-12-15 09:57:35.002215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:46.007 [2024-12-15 09:57:35.002221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.269 [2024-12-15 09:57:35.026038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.269 [2024-12-15 09:57:35.026073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:46.269 [2024-12-15 09:57:35.026084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.803 ms 00:19:46.269 [2024-12-15 09:57:35.026091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.269 [2024-12-15 09:57:35.050050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.269 [2024-12-15 09:57:35.050081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:46.269 [2024-12-15 09:57:35.050099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.925 ms 00:19:46.269 [2024-12-15 09:57:35.050106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.269 [2024-12-15 09:57:35.073592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.269 [2024-12-15 09:57:35.073625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:46.269 [2024-12-15 09:57:35.073635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.453 ms 00:19:46.269 [2024-12-15 09:57:35.073642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.269 [2024-12-15 09:57:35.097105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.269 [2024-12-15 09:57:35.097138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:46.269 [2024-12-15 09:57:35.097148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.393 ms 00:19:46.269 [2024-12-15 09:57:35.097155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.269 [2024-12-15 09:57:35.097191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:46.269 [2024-12-15 09:57:35.097210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 
[2024-12-15 09:57:35.097235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:19:46.269 [2024-12-15 09:57:35.097435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:46.269 [2024-12-15 09:57:35.097572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:46.270 [2024-12-15 09:57:35.097988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:46.270 [2024-12-15 09:57:35.097997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47932767-df3c-47cb-a32e-4820bc91e495 00:19:46.270 [2024-12-15 09:57:35.098005] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:19:46.270 [2024-12-15 09:57:35.098012] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:46.270 [2024-12-15 09:57:35.098020] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:46.270 [2024-12-15 09:57:35.098028] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:46.270 [2024-12-15 09:57:35.098036] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:46.270 [2024-12-15 09:57:35.098043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:46.270 [2024-12-15 09:57:35.098050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:46.270 [2024-12-15 09:57:35.098063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:46.270 [2024-12-15 09:57:35.098070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:46.270 [2024-12-15 09:57:35.098077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.270 [2024-12-15 09:57:35.098084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:46.270 [2024-12-15 09:57:35.098095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:19:46.270 [2024-12-15 09:57:35.098103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.111036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.270 [2024-12-15 09:57:35.111070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:46.270 [2024-12-15 09:57:35.111080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.904 ms 00:19:46.270 [2024-12-15 09:57:35.111091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.111329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.270 [2024-12-15 09:57:35.111345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:46.270 [2024-12-15 09:57:35.111353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:19:46.270 [2024-12-15 09:57:35.111360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.147160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.147202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.270 [2024-12-15 09:57:35.147213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.270 [2024-12-15 09:57:35.147222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.147295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.147309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.270 [2024-12-15 09:57:35.147317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.270 [2024-12-15 09:57:35.147325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.147392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.147402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.270 [2024-12-15 09:57:35.147410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.270 [2024-12-15 
09:57:35.147417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.147432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.147439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.270 [2024-12-15 09:57:35.147450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.270 [2024-12-15 09:57:35.147457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.224660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.224709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.270 [2024-12-15 09:57:35.224720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.270 [2024-12-15 09:57:35.224729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.255437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.255484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.270 [2024-12-15 09:57:35.255503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.270 [2024-12-15 09:57:35.255511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.270 [2024-12-15 09:57:35.255581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.270 [2024-12-15 09:57:35.255592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.270 [2024-12-15 09:57:35.255601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.271 [2024-12-15 09:57:35.255609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.271 [2024-12-15 09:57:35.255652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.271 [2024-12-15 09:57:35.255663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.271 [2024-12-15 09:57:35.255672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.271 [2024-12-15 09:57:35.255684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.271 [2024-12-15 09:57:35.255787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.271 [2024-12-15 09:57:35.255798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.271 [2024-12-15 09:57:35.255806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.271 [2024-12-15 09:57:35.255814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.271 [2024-12-15 09:57:35.255847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.271 [2024-12-15 09:57:35.255857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:46.271 [2024-12-15 09:57:35.255865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.271 [2024-12-15 09:57:35.255873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.271 [2024-12-15 09:57:35.255919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.271 [2024-12-15 09:57:35.255928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.271 [2024-12-15 09:57:35.255937] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.271 [2024-12-15 09:57:35.255945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.271 [2024-12-15 09:57:35.255996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:46.271 [2024-12-15 09:57:35.256017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.271 [2024-12-15 09:57:35.256026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:46.271 [2024-12-15 09:57:35.256038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.271 [2024-12-15 09:57:35.256173] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.144 ms, result 0 00:19:47.212 00:19:47.212 00:19:47.212 09:57:36 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:49.757 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:49.757 09:57:38 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:49.757 [2024-12-15 09:57:38.503637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:49.757 [2024-12-15 09:57:38.503798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74407 ] 00:19:49.757 [2024-12-15 09:57:38.658364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.018 [2024-12-15 09:57:38.877485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.279 [2024-12-15 09:57:39.143992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.279 [2024-12-15 09:57:39.144079] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.541 [2024-12-15 09:57:39.300509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.541 [2024-12-15 09:57:39.300573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.541 [2024-12-15 09:57:39.300602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:50.541 [2024-12-15 09:57:39.300614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.541 [2024-12-15 09:57:39.300672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.541 [2024-12-15 09:57:39.300684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.541 [2024-12-15 09:57:39.300693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:50.542 [2024-12-15 09:57:39.300702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.300723] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.542 [2024-12-15 09:57:39.301522] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.542 [2024-12-15 09:57:39.301552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.301563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.542 [2024-12-15 09:57:39.301572] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:19:50.542 [2024-12-15 09:57:39.301579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.303297] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.542 [2024-12-15 09:57:39.318020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.318074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.542 [2024-12-15 09:57:39.318088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.724 ms 00:19:50.542 [2024-12-15 09:57:39.318096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.318183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.318194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.542 [2024-12-15 09:57:39.318203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:50.542 [2024-12-15 09:57:39.318212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.326788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.326834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.542 [2024-12-15 09:57:39.326846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.473 ms 00:19:50.542 [2024-12-15 09:57:39.326854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.326956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.326967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.542 [2024-12-15 09:57:39.326977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:50.542 [2024-12-15 09:57:39.326986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.327038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.327048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.542 [2024-12-15 09:57:39.327057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:50.542 [2024-12-15 09:57:39.327065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.327098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.542 [2024-12-15 09:57:39.331337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.331380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.542 [2024-12-15 09:57:39.331391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:19:50.542 [2024-12-15 09:57:39.331399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.331441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.331450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.542 [2024-12-15 09:57:39.331458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:50.542 [2024-12-15 09:57:39.331469] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.331523] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.542 [2024-12-15 09:57:39.331547] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:50.542 [2024-12-15 09:57:39.331584] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.542 [2024-12-15 09:57:39.331601] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:50.542 [2024-12-15 09:57:39.331677] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:50.542 [2024-12-15 09:57:39.331687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.542 [2024-12-15 09:57:39.331701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:50.542 [2024-12-15 09:57:39.331713] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.542 [2024-12-15 09:57:39.331722] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.542 [2024-12-15 09:57:39.331730] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:50.542 [2024-12-15 09:57:39.331738] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:50.542 [2024-12-15 09:57:39.331746] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:50.542 [2024-12-15 09:57:39.331754] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:50.542 [2024-12-15 09:57:39.331768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.331775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.542 [2024-12-15 09:57:39.331783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:50.542 [2024-12-15 09:57:39.331791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.331855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.542 [2024-12-15 09:57:39.331865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.542 [2024-12-15 09:57:39.331873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:50.542 [2024-12-15 09:57:39.331881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.542 [2024-12-15 09:57:39.331951] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.542 [2024-12-15 09:57:39.331962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.542 [2024-12-15 09:57:39.331970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.542 [2024-12-15 09:57:39.331980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.542 [2024-12-15 09:57:39.331987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.542 [2024-12-15 09:57:39.331995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:19:50.542 [2024-12-15 09:57:39.332010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.542 [2024-12-15 09:57:39.332017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.542 [2024-12-15 09:57:39.332035] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.542 [2024-12-15 09:57:39.332044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:50.542 [2024-12-15 09:57:39.332052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.542 [2024-12-15 09:57:39.332059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.542 [2024-12-15 09:57:39.332066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:50.542 [2024-12-15 09:57:39.332073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.542 [2024-12-15 09:57:39.332095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:50.542 [2024-12-15 09:57:39.332102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:50.542 [2024-12-15 09:57:39.332117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:50.542 [2024-12-15 09:57:39.332125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:50.542 [2024-12-15 09:57:39.332132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.542 [2024-12-15 09:57:39.332139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.542 [2024-12-15 09:57:39.332152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.542 [2024-12-15 09:57:39.332158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.542 [2024-12-15 09:57:39.332171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.542 [2024-12-15 09:57:39.332178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.542 [2024-12-15 09:57:39.332192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.542 [2024-12-15 09:57:39.332199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.542 [2024-12-15 09:57:39.332212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.542 [2024-12-15 09:57:39.332219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.542 [2024-12-15 09:57:39.332232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:50.542 [2024-12-15 09:57:39.332239] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:50.542 [2024-12-15 09:57:39.332245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.542 [2024-12-15 09:57:39.332276] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.542 [2024-12-15 09:57:39.332287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.542 [2024-12-15 09:57:39.332296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.542 [2024-12-15 09:57:39.332306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.542 [2024-12-15 09:57:39.332315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:50.542 [2024-12-15 09:57:39.332322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.542 [2024-12-15 09:57:39.332329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.542 [2024-12-15 09:57:39.332337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.542 [2024-12-15 09:57:39.332344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.542 [2024-12-15 09:57:39.332351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:50.543 [2024-12-15 09:57:39.332359] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.543 [2024-12-15 09:57:39.332369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.543 [2024-12-15 09:57:39.332378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:50.543 [2024-12-15 09:57:39.332386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:50.543 [2024-12-15 09:57:39.332393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:50.543 [2024-12-15 09:57:39.332400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:50.543 [2024-12-15 09:57:39.332408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:50.543 [2024-12-15 09:57:39.332415] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:50.543 [2024-12-15 09:57:39.332422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:50.543 [2024-12-15 09:57:39.332430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:50.543 [2024-12-15 09:57:39.332437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:50.543 [2024-12-15 09:57:39.332445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:50.543 [2024-12-15 09:57:39.332452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:50.543 [2024-12-15 09:57:39.332459] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:50.543 [2024-12-15 09:57:39.332467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:50.543 [2024-12-15 09:57:39.332475] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.543 [2024-12-15 09:57:39.332483] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.543 [2024-12-15 09:57:39.332491] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.543 [2024-12-15 09:57:39.332498] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.543 [2024-12-15 09:57:39.332506] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.543 [2024-12-15 09:57:39.332513] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.543 [2024-12-15 09:57:39.332520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.332528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.543 [2024-12-15 09:57:39.332536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:19:50.543 [2024-12-15 09:57:39.332546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.351436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.351489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.543 [2024-12-15 09:57:39.351503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.814 ms 00:19:50.543 [2024-12-15 09:57:39.351518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.351614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.351623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.543 [2024-12-15 09:57:39.351633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:50.543 [2024-12-15 09:57:39.351640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.396389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.396452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.543 [2024-12-15 09:57:39.396465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.691 ms 00:19:50.543 [2024-12-15 09:57:39.396474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.396528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.396538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.543 [2024-12-15 09:57:39.396547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.543 [2024-12-15 09:57:39.396555] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.397141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.397190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.543 [2024-12-15 09:57:39.397202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:19:50.543 [2024-12-15 09:57:39.397219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.397379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.397392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.543 [2024-12-15 09:57:39.397402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:50.543 [2024-12-15 09:57:39.397410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.414371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.414422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.543 [2024-12-15 09:57:39.414434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.939 ms 00:19:50.543 [2024-12-15 09:57:39.414443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.429519] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:50.543 [2024-12-15 09:57:39.429572] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.543 [2024-12-15 09:57:39.429586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.429594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.543 [2024-12-15 09:57:39.429605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.025 ms 00:19:50.543 [2024-12-15 09:57:39.429612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.456020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.456074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.543 [2024-12-15 09:57:39.456087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.347 ms 00:19:50.543 [2024-12-15 09:57:39.456096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.469582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.469632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.543 [2024-12-15 09:57:39.469644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.422 ms 00:19:50.543 [2024-12-15 09:57:39.469652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.543 [2024-12-15 09:57:39.482814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.543 [2024-12-15 09:57:39.482876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.543 [2024-12-15 09:57:39.482888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.112 ms 00:19:50.543 [2024-12-15 09:57:39.482896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
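
[Editor's note] The FTL management traces above and below follow a fixed four-entry pattern emitted by trace_step() in mngt/ftl_mngt.c: an "Action" marker, then "name:", "duration:", and "status:" entries for each startup/shutdown step, which makes it easy to rank steps by cost when a run looks slow. Below is a minimal Python sketch, not part of the test run: the regexes assume exactly the entry format shown in this log, and "ftl.log" is a hypothetical file holding the console text.

    import re

    def step_durations(text):
        # 407:...name: <step> and 409:...duration: <ms> ms appear in
        # lockstep inside each Action/name/duration/status quad, so
        # pairing them in document order recovers (step, ms) tuples.
        # A name ends either at a newline or, in flattened logs like
        # this one, at the next Jenkins elapsed stamp "HH:MM:SS.mmm [".
        names = re.findall(
            r"407:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*?)"
            r"(?:\n| \d{2}:\d{2}:\d{2}\.\d{3} \[)", text)
        durs = re.findall(
            r"409:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: "
            r"([\d.]+) ms", text)
        return [(n, float(d)) for n, d in zip(names, durs)]

    with open("ftl.log") as f:  # hypothetical path
        for name, ms in sorted(step_durations(f.read()),
                               key=lambda t: -t[1])[:5]:
            print(f"{ms:9.3f} ms  {name}")

Applied to this log it would put "Restore P2L checkpoints" (67.983 ms, just below) at the top of the startup sequence.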
00:19:50.543 [2024-12-15 09:57:39.483316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.543 [2024-12-15 09:57:39.483428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:19:50.543 [2024-12-15 09:57:39.483438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms
00:19:50.543 [2024-12-15 09:57:39.483447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.543 [2024-12-15 09:57:39.551449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.543 [2024-12-15 09:57:39.551514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:50.543 [2024-12-15 09:57:39.551529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.983 ms
00:19:50.543 [2024-12-15 09:57:39.551538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.804 [2024-12-15 09:57:39.563221] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:19:50.804 [2024-12-15 09:57:39.566353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.804 [2024-12-15 09:57:39.566402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:50.805 [2024-12-15 09:57:39.566414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.749 ms
00:19:50.805 [2024-12-15 09:57:39.566429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.566515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.566526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:50.805 [2024-12-15 09:57:39.566535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:19:50.805 [2024-12-15 09:57:39.566545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.566615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.566626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:50.805 [2024-12-15 09:57:39.566634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:19:50.805 [2024-12-15 09:57:39.566643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.568075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.568125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:19:50.805 [2024-12-15 09:57:39.568137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms
00:19:50.805 [2024-12-15 09:57:39.568145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.568182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.568191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:50.805 [2024-12-15 09:57:39.568206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:50.805 [2024-12-15 09:57:39.568215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.568280] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:50.805 [2024-12-15 09:57:39.568292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.568303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:50.805 [2024-12-15 09:57:39.568312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:19:50.805 [2024-12-15 09:57:39.568321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.595264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.595319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:50.805 [2024-12-15 09:57:39.595333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.922 ms
00:19:50.805 [2024-12-15 09:57:39.595342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.595441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.805 [2024-12-15 09:57:39.595452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:50.805 [2024-12-15 09:57:39.595462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:19:50.805 [2024-12-15 09:57:39.595471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.805 [2024-12-15 09:57:39.596785] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.734 ms, result 0
00:19:51.750  [2024-12-15T09:57:41.711Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-15T09:57:42.657Z] Copying: 22/1024 [MB] (10 MBps) [2024-12-15T09:57:44.045Z] Copying: 33/1024 [MB] (11 MBps) [2024-12-15T09:57:44.660Z] Copying: 43/1024 [MB] (10 MBps) [2024-12-15T09:57:46.049Z] Copying: 66/1024 [MB] (22 MBps) [2024-12-15T09:57:46.620Z] Copying: 94/1024 [MB] (27 MBps) [2024-12-15T09:57:48.007Z] Copying: 130/1024 [MB] (36 MBps) [2024-12-15T09:57:48.952Z] Copying: 140/1024 [MB] (10 MBps) [2024-12-15T09:57:49.897Z] Copying: 170/1024 [MB] (29 MBps) [2024-12-15T09:57:50.842Z] Copying: 187/1024 [MB] (17 MBps) [2024-12-15T09:57:51.787Z] Copying: 202/1024 [MB] (14 MBps) [2024-12-15T09:57:52.732Z] Copying: 223/1024 [MB] (20 MBps) [2024-12-15T09:57:53.678Z] Copying: 237/1024 [MB] (14 MBps) [2024-12-15T09:57:54.623Z] Copying: 254/1024 [MB] (16 MBps) [2024-12-15T09:57:56.012Z] Copying: 275/1024 [MB] (21 MBps) [2024-12-15T09:57:56.956Z] Copying: 292/1024 [MB] (16 MBps) [2024-12-15T09:57:57.898Z] Copying: 312/1024 [MB] (20 MBps) [2024-12-15T09:57:58.843Z] Copying: 329/1024 [MB] (16 MBps) [2024-12-15T09:57:59.786Z] Copying: 345/1024 [MB] (16 MBps) [2024-12-15T09:58:00.729Z] Copying: 362/1024 [MB] (17 MBps) [2024-12-15T09:58:01.675Z] Copying: 374/1024 [MB] (11 MBps) [2024-12-15T09:58:02.619Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-15T09:58:04.006Z] Copying: 394/1024 [MB] (10 MBps) [2024-12-15T09:58:04.950Z] Copying: 405/1024 [MB] (10 MBps) [2024-12-15T09:58:05.901Z] Copying: 416/1024 [MB] (10 MBps) [2024-12-15T09:58:06.855Z] Copying: 426/1024 [MB] (10 MBps) [2024-12-15T09:58:07.800Z] Copying: 436/1024 [MB] (10 MBps) [2024-12-15T09:58:08.747Z] Copying: 447/1024 [MB] (10 MBps) [2024-12-15T09:58:09.694Z] Copying: 457/1024 [MB] (10 MBps) [2024-12-15T09:58:10.638Z] Copying: 467/1024 [MB] (10 MBps) [2024-12-15T09:58:12.025Z] Copying: 488688/1048576 [kB] (10152 kBps) [2024-12-15T09:58:12.968Z] Copying: 490/1024 [MB] (12 MBps) [2024-12-15T09:58:13.981Z] Copying: 500/1024 [MB] (10 MBps) [2024-12-15T09:58:14.927Z] Copying: 518/1024 [MB] (17 MBps) [2024-12-15T09:58:15.870Z] Copying: 535/1024 [MB] (16 MBps)
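
[Editor's note] Each bracketed ISO-8601 stamp in the "Copying:" progress stream above (which continues below, ending at "Copying: 1024/1024 [MB] (average 15 MBps)") marks one spdk_dd progress tick, so the reported average is easy to cross-check: roughly 1012 MB move between 09:57:41.711Z (12 MB) and 09:58:47.265Z (1024 MB), about 65.6 s, i.e. 1012 / 65.6 ≈ 15.4 MBps, consistent with the 15 MBps spdk_dd prints. A minimal sketch of that arithmetic in Python; the two (timestamp, MB) pairs are copied from this log:

    from datetime import datetime

    def mbps(t0, mb0, t1, mb1):
        # Parse the [2024-12-15T09:57:41.711Z]-style tick stamps and
        # return the average throughput between the two ticks.
        fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
        secs = (datetime.strptime(t1, fmt)
                - datetime.strptime(t0, fmt)).total_seconds()
        return (mb1 - mb0) / secs

    print(mbps("2024-12-15T09:57:41.711Z", 12,
               "2024-12-15T09:58:47.265Z", 1024))  # ~15.4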
[2024-12-15T09:58:16.816Z] Copying: 547/1024 [MB] (12 MBps) [2024-12-15T09:58:17.759Z] Copying: 560/1024 [MB] (12 MBps) [2024-12-15T09:58:18.700Z] Copying: 575/1024 [MB] (14 MBps) [2024-12-15T09:58:19.641Z] Copying: 595/1024 [MB] (20 MBps) [2024-12-15T09:58:21.029Z] Copying: 607/1024 [MB] (12 MBps) [2024-12-15T09:58:21.973Z] Copying: 619/1024 [MB] (11 MBps) [2024-12-15T09:58:22.918Z] Copying: 644448/1048576 [kB] (10160 kBps) [2024-12-15T09:58:23.864Z] Copying: 644/1024 [MB] (15 MBps) [2024-12-15T09:58:24.809Z] Copying: 657/1024 [MB] (12 MBps) [2024-12-15T09:58:25.752Z] Copying: 668/1024 [MB] (11 MBps) [2024-12-15T09:58:26.696Z] Copying: 678/1024 [MB] (10 MBps) [2024-12-15T09:58:27.642Z] Copying: 691/1024 [MB] (12 MBps) [2024-12-15T09:58:29.024Z] Copying: 707/1024 [MB] (15 MBps) [2024-12-15T09:58:29.961Z] Copying: 724/1024 [MB] (17 MBps) [2024-12-15T09:58:30.904Z] Copying: 756/1024 [MB] (32 MBps) [2024-12-15T09:58:31.849Z] Copying: 772/1024 [MB] (16 MBps) [2024-12-15T09:58:32.795Z] Copying: 789/1024 [MB] (16 MBps) [2024-12-15T09:58:33.740Z] Copying: 799/1024 [MB] (10 MBps) [2024-12-15T09:58:34.685Z] Copying: 818/1024 [MB] (18 MBps) [2024-12-15T09:58:35.632Z] Copying: 833/1024 [MB] (15 MBps) [2024-12-15T09:58:37.018Z] Copying: 847/1024 [MB] (13 MBps) [2024-12-15T09:58:37.962Z] Copying: 860/1024 [MB] (12 MBps) [2024-12-15T09:58:38.906Z] Copying: 876/1024 [MB] (16 MBps) [2024-12-15T09:58:39.850Z] Copying: 895/1024 [MB] (19 MBps) [2024-12-15T09:58:40.794Z] Copying: 914/1024 [MB] (19 MBps) [2024-12-15T09:58:41.739Z] Copying: 936/1024 [MB] (22 MBps) [2024-12-15T09:58:42.700Z] Copying: 957/1024 [MB] (20 MBps) [2024-12-15T09:58:43.654Z] Copying: 967/1024 [MB] (10 MBps) [2024-12-15T09:58:45.041Z] Copying: 990/1024 [MB] (22 MBps) [2024-12-15T09:58:45.613Z] Copying: 1005/1024 [MB] (15 MBps) [2024-12-15T09:58:47.003Z] Copying: 1015/1024 [MB] (10 MBps) [2024-12-15T09:58:47.265Z] Copying: 1048004/1048576 [kB] (8104 kBps) [2024-12-15T09:58:47.265Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-15 09:58:47.143713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.143802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.249 [2024-12-15 09:58:47.143818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:58.249 [2024-12-15 09:58:47.143827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.249 [2024-12-15 09:58:47.145714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.249 [2024-12-15 09:58:47.151130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.151178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.249 [2024-12-15 09:58:47.151191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.360 ms 00:20:58.249 [2024-12-15 09:58:47.151200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.249 [2024-12-15 09:58:47.163161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.163217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.249 [2024-12-15 09:58:47.163245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms 00:20:58.249 [2024-12-15 09:58:47.163267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.249 [2024-12-15 09:58:47.185003] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.185049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.249 [2024-12-15 09:58:47.185061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.716 ms 00:20:58.249 [2024-12-15 09:58:47.185070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.249 [2024-12-15 09:58:47.191186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.191225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:58.249 [2024-12-15 09:58:47.191237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.070 ms 00:20:58.249 [2024-12-15 09:58:47.191260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.249 [2024-12-15 09:58:47.218272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.218324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.249 [2024-12-15 09:58:47.218337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.941 ms 00:20:58.249 [2024-12-15 09:58:47.218345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.249 [2024-12-15 09:58:47.234137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.249 [2024-12-15 09:58:47.234184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.249 [2024-12-15 09:58:47.234196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.746 ms 00:20:58.249 [2024-12-15 09:58:47.234203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.511 [2024-12-15 09:58:47.360061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.511 [2024-12-15 09:58:47.360112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.511 [2024-12-15 09:58:47.360125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.808 ms 00:20:58.511 [2024-12-15 09:58:47.360133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.511 [2024-12-15 09:58:47.386482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.511 [2024-12-15 09:58:47.386529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:58.511 [2024-12-15 09:58:47.386542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.326 ms 00:20:58.511 [2024-12-15 09:58:47.386549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.511 [2024-12-15 09:58:47.412190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.511 [2024-12-15 09:58:47.412234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:58.511 [2024-12-15 09:58:47.412268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.597 ms 00:20:58.511 [2024-12-15 09:58:47.412275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.511 [2024-12-15 09:58:47.437420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.511 [2024-12-15 09:58:47.437464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.511 [2024-12-15 09:58:47.437475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.101 ms 00:20:58.511 [2024-12-15 09:58:47.437482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:58.511 [2024-12-15 09:58:47.462319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.511 [2024-12-15 09:58:47.462363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.511 [2024-12-15 09:58:47.462375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.751 ms 00:20:58.511 [2024-12-15 09:58:47.462382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.511 [2024-12-15 09:58:47.462424] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.511 [2024-12-15 09:58:47.462441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 94208 / 261120 wr_cnt: 1 state: open 00:20:58.511 [2024-12-15 09:58:47.462452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.511 [2024-12-15 09:58:47.462557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 
wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.462610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 22-100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.512 [2024-12-15 09:58:47.463239] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.512 [2024-12-15 09:58:47.463248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47932767-df3c-47cb-a32e-4820bc91e495 00:20:58.512 [2024-12-15 09:58:47.463280] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 94208 00:20:58.512 [2024-12-15 09:58:47.463289] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95168 00:20:58.512 [2024-12-15 09:58:47.463297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 94208 00:20:58.512 [2024-12-15 09:58:47.463310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:20:58.512 [2024-12-15 09:58:47.463318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.513 [2024-12-15 09:58:47.463327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.513 [2024-12-15 09:58:47.463334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.513 [2024-12-15 09:58:47.463348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.513 [2024-12-15 09:58:47.463355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.513 [2024-12-15 09:58:47.463364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.513 [2024-12-15 09:58:47.463372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.513 [2024-12-15 09:58:47.463381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:20:58.513 [2024-12-15 09:58:47.463389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.513 [2024-12-15 09:58:47.477035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.513 [2024-12-15 09:58:47.477081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.513 [2024-12-15 09:58:47.477092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.611 ms 00:20:58.513 [2024-12-15 09:58:47.477101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.513 [2024-12-15 09:58:47.477342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.513 [2024-12-15 09:58:47.477352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.513 [2024-12-15 09:58:47.477361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:58.513 [2024-12-15 09:58:47.477370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.513 [2024-12-15 09:58:47.516389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.513 [2024-12-15 09:58:47.516439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.513 [2024-12-15
09:58:47.516450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.513 [2024-12-15 09:58:47.516458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.513 [2024-12-15 09:58:47.516518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.513 [2024-12-15 09:58:47.516527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.513 [2024-12-15 09:58:47.516535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.513 [2024-12-15 09:58:47.516543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.513 [2024-12-15 09:58:47.516625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.513 [2024-12-15 09:58:47.516642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.513 [2024-12-15 09:58:47.516651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.513 [2024-12-15 09:58:47.516659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.513 [2024-12-15 09:58:47.516674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.513 [2024-12-15 09:58:47.516682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.513 [2024-12-15 09:58:47.516690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.513 [2024-12-15 09:58:47.516697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.596540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.596608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.774 [2024-12-15 09:58:47.596620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.596629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.628748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.628796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.774 [2024-12-15 09:58:47.628808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.628816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.628884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.628894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.774 [2024-12-15 09:58:47.628911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.628919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.628964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.628975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.774 [2024-12-15 09:58:47.628984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.628991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.629089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.629099] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.774 [2024-12-15 09:58:47.629108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.629119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.629155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.629165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:58.774 [2024-12-15 09:58:47.629173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.629181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.629221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.629231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.774 [2024-12-15 09:58:47.629240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.629251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.629319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.774 [2024-12-15 09:58:47.629335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.774 [2024-12-15 09:58:47.629345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.774 [2024-12-15 09:58:47.629354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.774 [2024-12-15 09:58:47.629483] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.700 ms, result 0 00:21:00.160 00:21:00.160 00:21:00.160 09:58:49 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:00.160 [2024-12-15 09:58:49.122531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
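The spdk_dd restore step above reads 1024 MiB back out of ftl0 starting 512 MiB in. A minimal Python sketch cross-checking those numbers; the 4096-byte FTL block size is an assumption here, consistent with the 1024 [MB] total reported by the copy progress below, and --skip/--count are taken to be counted in those block-sized I/O units:

FTL_BLOCK_SIZE = 4096  # bytes per I/O unit, assumed

# --skip / --count values from the spdk_dd invocation above
skip, count = 131072, 262144
print(skip * FTL_BLOCK_SIZE // 2**20, "MiB offset into ftl0")  # 512 MiB
print(count * FTL_BLOCK_SIZE // 2**20, "MiB to copy")          # 1024 MiB

# The WAF printed by the shutdown statistics above is total writes
# divided by user writes:
print(round(95168 / 94208, 4))  # 1.0102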
00:21:00.160 [2024-12-15 09:58:49.122686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75137 ] 00:21:00.421 [2024-12-15 09:58:49.277912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.681 [2024-12-15 09:58:49.484727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.944 [2024-12-15 09:58:49.769919] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.944 [2024-12-15 09:58:49.770000] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.944 [2024-12-15 09:58:49.925135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.925198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.944 [2024-12-15 09:58:49.925214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:00.944 [2024-12-15 09:58:49.925225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.925293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.925304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.944 [2024-12-15 09:58:49.925314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:00.944 [2024-12-15 09:58:49.925322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.925343] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.944 [2024-12-15 09:58:49.926106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.944 [2024-12-15 09:58:49.926140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.926148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.944 [2024-12-15 09:58:49.926157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:21:00.944 [2024-12-15 09:58:49.926165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.927891] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:00.944 [2024-12-15 09:58:49.942220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.942278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:00.944 [2024-12-15 09:58:49.942293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.331 ms 00:21:00.944 [2024-12-15 09:58:49.942301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.942373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.942383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:00.944 [2024-12-15 09:58:49.942392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:00.944 [2024-12-15 09:58:49.942400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.950382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 
09:58:49.950426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.944 [2024-12-15 09:58:49.950436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.906 ms 00:21:00.944 [2024-12-15 09:58:49.950444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.950538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.950548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.944 [2024-12-15 09:58:49.950556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:00.944 [2024-12-15 09:58:49.950564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.950610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.950620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.944 [2024-12-15 09:58:49.950630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:00.944 [2024-12-15 09:58:49.950639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.950670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.944 [2024-12-15 09:58:49.954957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.954997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.944 [2024-12-15 09:58:49.955008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:21:00.944 [2024-12-15 09:58:49.955015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.955054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.955062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.944 [2024-12-15 09:58:49.955070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:00.944 [2024-12-15 09:58:49.955080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.955130] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:00.944 [2024-12-15 09:58:49.955153] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:00.944 [2024-12-15 09:58:49.955188] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:00.944 [2024-12-15 09:58:49.955204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:00.944 [2024-12-15 09:58:49.955297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:00.944 [2024-12-15 09:58:49.955309] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.944 [2024-12-15 09:58:49.955322] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:00.944 [2024-12-15 09:58:49.955333] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955341] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955350] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.944 [2024-12-15 09:58:49.955357] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.944 [2024-12-15 09:58:49.955366] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:00.944 [2024-12-15 09:58:49.955374] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:00.944 [2024-12-15 09:58:49.955382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.955390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.944 [2024-12-15 09:58:49.955398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:21:00.944 [2024-12-15 09:58:49.955405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.955468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.944 [2024-12-15 09:58:49.955477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.944 [2024-12-15 09:58:49.955485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:00.944 [2024-12-15 09:58:49.955492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.944 [2024-12-15 09:58:49.955563] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.944 [2024-12-15 09:58:49.955573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.944 [2024-12-15 09:58:49.955581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.944 [2024-12-15 09:58:49.955603] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.944 [2024-12-15 09:58:49.955624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.944 [2024-12-15 09:58:49.955637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.944 [2024-12-15 09:58:49.955646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.944 [2024-12-15 09:58:49.955653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.944 [2024-12-15 09:58:49.955660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.944 [2024-12-15 09:58:49.955667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:00.944 [2024-12-15 09:58:49.955675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.944 [2024-12-15 09:58:49.955696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:00.944 [2024-12-15 09:58:49.955703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:00.944 [2024-12-15 09:58:49.955716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:00.944 [2024-12-15 09:58:49.955723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.944 [2024-12-15 09:58:49.955736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.944 [2024-12-15 09:58:49.955757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955771] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.944 [2024-12-15 09:58:49.955777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.944 [2024-12-15 09:58:49.955797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:00.944 [2024-12-15 09:58:49.955804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:00.944 [2024-12-15 09:58:49.955810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.944 [2024-12-15 09:58:49.955817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.945 [2024-12-15 09:58:49.955823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.945 [2024-12-15 09:58:49.955829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.945 [2024-12-15 09:58:49.955836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:00.945 [2024-12-15 09:58:49.955843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.945 [2024-12-15 09:58:49.955848] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.945 [2024-12-15 09:58:49.955859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.945 [2024-12-15 09:58:49.955867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.945 [2024-12-15 09:58:49.955876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.945 [2024-12-15 09:58:49.955886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.945 [2024-12-15 09:58:49.955893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.945 [2024-12-15 09:58:49.955899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.945 [2024-12-15 09:58:49.955906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.945 [2024-12-15 09:58:49.955913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.945 [2024-12-15 09:58:49.955920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.945 [2024-12-15 09:58:49.955927] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.945 [2024-12-15 09:58:49.955937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.945 [2024-12-15 09:58:49.955946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.945 [2024-12-15 09:58:49.955953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:00.945 [2024-12-15 09:58:49.955960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:00.945 [2024-12-15 09:58:49.955969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:00.945 [2024-12-15 09:58:49.955976] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:00.945 [2024-12-15 09:58:49.955983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:00.945 [2024-12-15 09:58:49.955990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:00.945 [2024-12-15 09:58:49.955997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:00.945 [2024-12-15 09:58:49.956004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:00.945 [2024-12-15 09:58:49.956011] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:00.945 [2024-12-15 09:58:49.956018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:00.945 [2024-12-15 09:58:49.956026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:00.945 [2024-12-15 09:58:49.956034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:00.945 [2024-12-15 09:58:49.956050] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.945 [2024-12-15 09:58:49.956059] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.945 [2024-12-15 09:58:49.956066] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.945 [2024-12-15 09:58:49.956073] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.945 [2024-12-15 09:58:49.956080] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.945 [2024-12-15 09:58:49.956088] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:00.945 [2024-12-15 09:58:49.956095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.945 [2024-12-15 09:58:49.956103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.945 [2024-12-15 09:58:49.956111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:21:00.945 [2024-12-15 09:58:49.956119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.206 [2024-12-15 09:58:49.974185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.206 [2024-12-15 09:58:49.974233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.206 [2024-12-15 09:58:49.974245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.022 ms 00:21:01.206 [2024-12-15 09:58:49.974275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.206 [2024-12-15 09:58:49.974369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.206 [2024-12-15 09:58:49.974379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.206 [2024-12-15 09:58:49.974390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:01.206 [2024-12-15 09:58:49.974399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.206 [2024-12-15 09:58:50.020864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.206 [2024-12-15 09:58:50.020919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.207 [2024-12-15 09:58:50.020931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.413 ms 00:21:01.207 [2024-12-15 09:58:50.020941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.020991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.021001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.207 [2024-12-15 09:58:50.021010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.207 [2024-12-15 09:58:50.021019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.021632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.021666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.207 [2024-12-15 09:58:50.021677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:21:01.207 [2024-12-15 09:58:50.021692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.021821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.021831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.207 [2024-12-15 09:58:50.021840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:01.207 [2024-12-15 09:58:50.021848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.039026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.039070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.207 [2024-12-15 09:58:50.039082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.153 ms
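The SB metadata layout dumped above expresses each region as hex block offsets and sizes (blk_offs/blk_sz). With the same 4 KiB block-size assumption these line up with the MiB figures in the preceding layout dump; a short sketch for two of them (mapping type 0x2 to the l2p region and 0x8 to data_nvc is an inference from the matching offsets and sizes):

FTL_BLOCK_SIZE = 4096  # bytes per block, assumed

def to_mib(blocks: int) -> float:
    return blocks * FTL_BLOCK_SIZE / 2**20

# (type, blk_offs, blk_sz) triples copied from the region dump above
for region_type, blk_offs, blk_sz in [(0x2, 0x20, 0x5000), (0x8, 0x61e0, 0x100000)]:
    print(f"type {region_type:#x}: offset {to_mib(blk_offs):.2f} MiB, size {to_mib(blk_sz):.2f} MiB")
# type 0x2: offset 0.12 MiB, size 80.00 MiB     -> matches "Region l2p"
# type 0x8: offset 97.88 MiB, size 4096.00 MiB  -> matches "Region data_nvc"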
00:21:01.207 [2024-12-15 09:58:50.039090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.053249] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:01.207 [2024-12-15 09:58:50.053304] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:01.207 [2024-12-15 09:58:50.053316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.053325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:01.207 [2024-12-15 09:58:50.053335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.115 ms 00:21:01.207 [2024-12-15 09:58:50.053343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.079134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.079186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:01.207 [2024-12-15 09:58:50.079200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.737 ms 00:21:01.207 [2024-12-15 09:58:50.079210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.092754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.092800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:01.207 [2024-12-15 09:58:50.092812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.476 ms 00:21:01.207 [2024-12-15 09:58:50.092820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.105800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.105846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:01.207 [2024-12-15 09:58:50.105869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.933 ms 00:21:01.207 [2024-12-15 09:58:50.105877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.106298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.106326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:01.207 [2024-12-15 09:58:50.106336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:21:01.207 [2024-12-15 09:58:50.106345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.173295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.173355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:01.207 [2024-12-15 09:58:50.173371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.930 ms 00:21:01.207 [2024-12-15 09:58:50.173379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.184763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:01.207 [2024-12-15 09:58:50.187608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.187652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.207 [2024-12-15 09:58:50.187664] mngt/ftl_mngt.c:
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.168 ms 00:21:01.207 [2024-12-15 09:58:50.187678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.187751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.187762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:01.207 [2024-12-15 09:58:50.187771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:01.207 [2024-12-15 09:58:50.187779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.189166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.189216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.207 [2024-12-15 09:58:50.189226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:21:01.207 [2024-12-15 09:58:50.189235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.190602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.190643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:01.207 [2024-12-15 09:58:50.190654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:21:01.207 [2024-12-15 09:58:50.190662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.190697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.190707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.207 [2024-12-15 09:58:50.190722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:01.207 [2024-12-15 09:58:50.190729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.190766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:01.207 [2024-12-15 09:58:50.190777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.190788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:01.207 [2024-12-15 09:58:50.190797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:01.207 [2024-12-15 09:58:50.190805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.216922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.216969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.207 [2024-12-15 09:58:50.216982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.097 ms 00:21:01.207 [2024-12-15 09:58:50.216991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.207 [2024-12-15 09:58:50.217082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.207 [2024-12-15 09:58:50.217092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.207 [2024-12-15 09:58:50.217101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:01.207 [2024-12-15 09:58:50.217110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.469 [2024-12-15 09:58:50.222192] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 295.921 ms, result 0 00:21:02.414 [2024-12-15T09:58:52.811Z] Copying: 21/1024 [MB] (21 MBps) ... (per-interval progress, 42/1024 through 1017/1024 [MB] at 10-27 MBps) ... [2024-12-15T10:00:01.799Z] Copying: 1024/1024 [MB] (average 14 MBps)
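The copy finished at 1024/1024 [MB] with a reported average of 14 MBps. A rough Python cross-check from the first and last progress timestamps above (a sketch; spdk_dd starts a little before the first progress line, so this lands slightly above the reported average):

from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%f"
t0 = datetime.strptime("2024-12-15T09:58:52.811", fmt)  # 21 MB copied
t1 = datetime.strptime("2024-12-15T10:00:01.799", fmt)  # 1024 MB copied
rate = (1024 - 21) / (t1 - t0).total_seconds()
print(f"{rate:.1f} MBps")  # ~14.5, consistent with "average 14 MBps"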
[2024-12-15 10:00:01.780355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.783 [2024-12-15 10:00:01.780447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:12.783 [2024-12-15 10:00:01.780476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:12.783 [2024-12-15 10:00:01.780485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.783 [2024-12-15 10:00:01.780509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.783 [2024-12-15 10:00:01.792191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.783 [2024-12-15 10:00:01.792280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:12.783 [2024-12-15 10:00:01.792309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.660 ms 00:22:12.784 [2024-12-15 10:00:01.792329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.784 [2024-12-15 10:00:01.792870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.784 [2024-12-15 10:00:01.792891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:12.784 [2024-12-15 10:00:01.792907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:22:12.784 [2024-12-15 10:00:01.792916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.051 [2024-12-15 10:00:01.799422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.051 [2024-12-15 10:00:01.799463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:13.051 [2024-12-15 10:00:01.799474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:22:13.051 [2024-12-15 10:00:01.799483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.051 [2024-12-15 10:00:01.805640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.051 [2024-12-15 10:00:01.805678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:13.051 [2024-12-15 10:00:01.805689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:22:13.051 [2024-12-15 10:00:01.805705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.051 [2024-12-15 10:00:01.832978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.051 [2024-12-15 10:00:01.833021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:13.051 [2024-12-15 10:00:01.833033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.202 ms 00:22:13.051 [2024-12-15 10:00:01.833041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.051 [2024-12-15 10:00:01.849077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.051 [2024-12-15 10:00:01.849120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:13.051 [2024-12-15 10:00:01.849132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.988 ms 00:22:13.051 [2024-12-15 10:00:01.849140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.313 [2024-12-15 10:00:02.155608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.313 [2024-12-15 10:00:02.155665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:13.313 [2024-12-15 10:00:02.155678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 306.433 ms 00:22:13.313 [2024-12-15 10:00:02.155687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.313 [2024-12-15 10:00:02.181878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.313 [2024-12-15 10:00:02.181928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:13.313 [2024-12-15 10:00:02.181939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.168 ms 00:22:13.313 [2024-12-15 10:00:02.181946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.313 [2024-12-15 10:00:02.207985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.313 [2024-12-15 10:00:02.208034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:13.313 [2024-12-15 10:00:02.208045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.993 ms 00:22:13.313 [2024-12-15 10:00:02.208064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.313 [2024-12-15 10:00:02.233304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.313 [2024-12-15 10:00:02.233355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:13.313 [2024-12-15 10:00:02.233367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.192 ms 00:22:13.313 [2024-12-15 10:00:02.233375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.313 [2024-12-15 10:00:02.258569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.313 [2024-12-15 10:00:02.258619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:13.313 [2024-12-15 10:00:02.258631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.104 ms 00:22:13.313 [2024-12-15 10:00:02.258638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.313 [2024-12-15 10:00:02.258683] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:13.313 [2024-12-15 10:00:02.258698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:22:13.313 [2024-12-15 10:00:02.258709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-78: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259325] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:13.314 [2024-12-15 10:00:02.259455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:13.315 [2024-12-15 10:00:02.259463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:13.315 [2024-12-15 10:00:02.259471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:13.315 [2024-12-15 10:00:02.259478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:13.315 [2024-12-15 10:00:02.259487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:13.315 [2024-12-15 10:00:02.259494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:13.315 [2024-12-15 10:00:02.259512] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:13.315 [2024-12-15 10:00:02.259521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47932767-df3c-47cb-a32e-4820bc91e495 00:22:13.315 [2024-12-15 10:00:02.259531] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:22:13.315 [2024-12-15 10:00:02.259538] ftl_debug.c: 214:ftl_dev_dump_stats: 
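The band dump is the quickest way to see how much of the device the restore test actually touched: only Band 1 carries valid blocks (133888 of 261120), every other band is still free. A minimal awk sketch for summarizing such a dump offline, assuming the raw one-band-per-line form the driver emits and an illustrative build.log capture:

# summarize band states and total valid blocks from an ftl_dev_dump_bands section
awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
    state = $NF                  # last field is the band state (open/free/...)
    count[state]++
    for (i = 1; i <= NF; i++)    # the field two past "Band" is the valid-block count
        if ($i ~ /^Band$/) valid += $(i + 2)
} END {
    for (s in count) printf "%s: %d bands\n", s, count[s]
    printf "total valid blocks: %d\n", valid
}' build.log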
00:22:13.315 [2024-12-15 10:00:02.259512] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:13.315 [2024-12-15 10:00:02.259521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 47932767-df3c-47cb-a32e-4820bc91e495
00:22:13.315 [2024-12-15 10:00:02.259531] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888
00:22:13.315 [2024-12-15 10:00:02.259538] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 40640
00:22:13.315 [2024-12-15 10:00:02.259547] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 39680
00:22:13.315 [2024-12-15 10:00:02.259563] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0242
00:22:13.315 [2024-12-15 10:00:02.259570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:13.315 [2024-12-15 10:00:02.259580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:13.315 [2024-12-15 10:00:02.259588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:13.315 [2024-12-15 10:00:02.259595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:13.315 [2024-12-15 10:00:02.259610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:13.315 [2024-12-15 10:00:02.259618] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics | duration: 0.936 ms | status: 0
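The WAF figure is simply total writes divided by user writes: 40640 / 39680 ≈ 1.0242, i.e. about 2.4% of extra FTL-internal writes on top of user data for this run. A quick recomputation from the counters printed above (values copied from the dump, script is illustrative):

# recompute write amplification from the dump_stats counters
total=40640   # "total writes" from the log
user=39680    # "user writes" from the log
awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t / u }'
# prints WAF: 1.0242, matching the ftl_debug.c output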
00:22:13.315 [2024-12-15 10:00:02.273087] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P | duration: 13.408 ms | status: 0
00:22:13.315 [2024-12-15 10:00:02.273403] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing | duration: 0.208 ms | status: 0
00:22:13.315 [2024-12-15 10:00:02.312319] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc | duration: 0.000 ms | status: 0
00:22:13.315 [2024-12-15 10:00:02.312469] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata | duration: 0.000 ms | status: 0
00:22:13.315 [2024-12-15 10:00:02.312588] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map | duration: 0.000 ms | status: 0
00:22:13.315 [2024-12-15 10:00:02.312636] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map | duration: 0.000 ms | status: 0
00:22:13.575 [2024-12-15 10:00:02.394585] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache | duration: 0.000 ms | status: 0
00:22:13.575 [2024-12-15 10:00:02.427322] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata | duration: 0.000 ms | status: 0
00:22:13.576 [2024-12-15 10:00:02.427460] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel | duration: 0.000 ms | status: 0
00:22:13.576 [2024-12-15 10:00:02.427539] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands | duration: 0.000 ms | status: 0
00:22:13.576 [2024-12-15 10:00:02.427671] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools | duration: 0.000 ms | status: 0
00:22:13.576 [2024-12-15 10:00:02.427734] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock | duration: 0.000 ms | status: 0
00:22:13.576 [2024-12-15 10:00:02.427800] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev | duration: 0.000 ms | status: 0
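The Rollback entries undo the startup steps in reverse order, which is why they mirror the Initialize/Open actions seen at device creation. The same LIFO teardown idea can be expressed in plain bash with a trap-driven cleanup stack; a minimal sketch, not the SPDK implementation (all names illustrative):

# register undo actions as resources are set up; run them in reverse on exit
declare -a CLEANUP=()
on_exit() {
    for ((i = ${#CLEANUP[@]} - 1; i >= 0; i--)); do
        eval "${CLEANUP[$i]}"
    done
}
trap on_exit EXIT
CLEANUP+=("echo 'close base bdev'")    # registered first, runs last
CLEANUP+=("echo 'close cache bdev'")
CLEANUP+=("echo 'free memory pools'")  # registered last, runs first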
00:22:13.576 [2024-12-15 10:00:02.427876] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev | duration: 0.000 ms | status: 0
00:22:13.576 [2024-12-15 10:00:02.428033] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 647.646 ms, result 0
00:22:14.519
00:22:14.519
00:22:14.519 10:00:03 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:16.435 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:22:16.435 10:00:05 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:22:16.435 10:00:05 -- ftl/restore.sh@85 -- # restore_kill
00:22:16.696 10:00:05 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:16.696 10:00:05 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:16.696 10:00:05 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:16.696 Process with pid 72848 is not found
00:22:16.696 10:00:05 -- ftl/restore.sh@32 -- # killprocess 72848
00:22:16.696 10:00:05 -- common/autotest_common.sh@936 -- # '[' -z 72848 ']'
00:22:16.696 10:00:05 -- common/autotest_common.sh@940 -- # kill -0 72848
00:22:16.696 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72848) - No such process
00:22:16.696 10:00:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72848 is not found'
00:22:16.696 10:00:05 -- ftl/restore.sh@33 -- # remove_shm
00:22:16.696 Remove shared memory files
00:22:16.696 10:00:05 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:16.696 10:00:05 -- ftl/common.sh@205 -- # rm -f rm -f
00:22:16.696 10:00:05 -- ftl/common.sh@206 -- # rm -f rm -f
00:22:16.696 10:00:05 -- ftl/common.sh@207 -- # rm -f rm -f
00:22:16.696 10:00:05 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:16.696 10:00:05 -- ftl/common.sh@209 -- # rm -f rm -f
00:22:16.696 ************************************
00:22:16.696 END TEST ftl_restore
00:22:16.696 ************************************
00:22:16.696
00:22:16.696 real 4m54.581s
00:22:16.696 user 4m41.663s
00:22:16.696 sys 0m12.599s
00:22:16.696 10:00:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:16.696 10:00:05 -- common/autotest_common.sh@10 -- # set +x
00:22:16.696 10:00:05 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:22:16.696 10:00:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:22:16.696 10:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:16.696 10:00:05 -- common/autotest_common.sh@10 -- # set +x
00:22:16.696 ************************************
00:22:16.696 START TEST ftl_dirty_shutdown
00:22:16.696 ************************************
00:22:16.696 10:00:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
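The killprocess teardown above probes the target with kill -0 before signalling: kill -0 delivers no signal and only reports whether the pid exists, which is why the already-exited pid 72848 yields the "No such process" line and a friendly message rather than a test failure. The pattern in isolation, as a sketch rather than the autotest_common.sh source:

pid=72848
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"              # process is alive: ask it to shut down
    wait "$pid" 2>/dev/null  # reap it if it is a child of this shell
else
    echo "Process with pid $pid is not found"
fi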
00:22:16.959 * Looking for test storage...
00:22:16.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:22:16.959 10:00:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:22:16.959 10:00:05 -- common/autotest_common.sh@1690 -- # lcov --version
00:22:16.959 10:00:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:22:16.959 10:00:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:22:16.959 10:00:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:22:16.959 10:00:05 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:22:16.959 10:00:05 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:22:16.959 10:00:05 -- scripts/common.sh@335 -- # IFS=.-:
00:22:16.959 10:00:05 -- scripts/common.sh@335 -- # read -ra ver1
00:22:16.959 10:00:05 -- scripts/common.sh@336 -- # IFS=.-:
00:22:16.959 10:00:05 -- scripts/common.sh@336 -- # read -ra ver2
00:22:16.959 10:00:05 -- scripts/common.sh@337 -- # local 'op=<'
00:22:16.959 10:00:05 -- scripts/common.sh@339 -- # ver1_l=2
00:22:16.959 10:00:05 -- scripts/common.sh@340 -- # ver2_l=1
00:22:16.959 10:00:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:22:16.959 10:00:05 -- scripts/common.sh@343 -- # case "$op" in
00:22:16.959 10:00:05 -- scripts/common.sh@344 -- # : 1
00:22:16.959 10:00:05 -- scripts/common.sh@363 -- # (( v = 0 ))
00:22:16.959 10:00:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:16.959 10:00:05 -- scripts/common.sh@364 -- # decimal 1
00:22:16.959 10:00:05 -- scripts/common.sh@352 -- # local d=1
00:22:16.959 10:00:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:16.959 10:00:05 -- scripts/common.sh@354 -- # echo 1
00:22:16.959 10:00:05 -- scripts/common.sh@364 -- # ver1[v]=1
00:22:16.959 10:00:05 -- scripts/common.sh@365 -- # decimal 2
00:22:16.959 10:00:05 -- scripts/common.sh@352 -- # local d=2
00:22:16.959 10:00:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:16.959 10:00:05 -- scripts/common.sh@354 -- # echo 2
00:22:16.959 10:00:05 -- scripts/common.sh@365 -- # ver2[v]=2
00:22:16.959 10:00:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:22:16.959 10:00:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:22:16.959 10:00:05 -- scripts/common.sh@367 -- # return 0
00:22:16.959 10:00:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:16.959 10:00:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:22:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:16.959 --rc genhtml_branch_coverage=1
00:22:16.959 --rc genhtml_function_coverage=1
00:22:16.959 --rc genhtml_legend=1
00:22:16.959 --rc geninfo_all_blocks=1
00:22:16.959 --rc geninfo_unexecuted_blocks=1
00:22:16.959
00:22:16.959 '
00:22:16.959 10:00:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:22:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:16.959 --rc genhtml_branch_coverage=1
00:22:16.959 --rc genhtml_function_coverage=1
00:22:16.959 --rc genhtml_legend=1
00:22:16.959 --rc geninfo_all_blocks=1
00:22:16.959 --rc geninfo_unexecuted_blocks=1
00:22:16.959
00:22:16.959 '
00:22:16.959 10:00:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:22:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:16.959 --rc genhtml_branch_coverage=1
00:22:16.959 --rc genhtml_function_coverage=1
00:22:16.959 --rc genhtml_legend=1
00:22:16.959 --rc geninfo_all_blocks=1
00:22:16.959 --rc geninfo_unexecuted_blocks=1
00:22:16.959
00:22:16.959 '
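The xtrace above is scripts/common.sh comparing the installed lcov version against 2, field by field on IFS=.-: (1 < 2, so cmp_versions returns 0 and the coverage options are selected). The core of that comparison can be sketched as a self-contained function; this is a simplified illustration of the traced logic, not the scripts/common.sh source:

# compare two dotted versions numerically, field by field
version_lt() {
    local IFS=.-: i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"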
00:22:16.959 10:00:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:22:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:16.959 --rc genhtml_branch_coverage=1
00:22:16.959 --rc genhtml_function_coverage=1
00:22:16.959 --rc genhtml_legend=1
00:22:16.959 --rc geninfo_all_blocks=1
00:22:16.959 --rc geninfo_unexecuted_blocks=1
00:22:16.959
00:22:16.959 '
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:22:16.959 10:00:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:22:16.959 10:00:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:22:16.959 10:00:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:22:16.959 10:00:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:22:16.959 10:00:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:22:16.959 10:00:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:16.959 10:00:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:22:16.959 10:00:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:22:16.959 10:00:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:16.959 10:00:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:16.959 10:00:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:22:16.959 10:00:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:22:16.959 10:00:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:16.959 10:00:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:16.959 10:00:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:22:16.959 10:00:05 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:22:16.959 10:00:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:16.959 10:00:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:16.959 10:00:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:22:16.959 10:00:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:22:16.959 10:00:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:16.959 10:00:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:16.959 10:00:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:16.959 10:00:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:16.959 10:00:05 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:22:16.959 10:00:05 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:22:16.959 10:00:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:16.959 10:00:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0
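dirty_shutdown.sh takes its NV-cache address via getopts, as traced here and continued below: the ':u:c:' spec means -u and -c both take arguments, with the leading ':' putting getopts in silent-error mode. The shape of that loop as a standalone sketch (simplified; the real script also computes its shift count from the trace above):

# parse -c <nv_cache_bdf> and -u <uuid>, leave positional args in "$@"
while getopts ':u:c:' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;
        u) uuid=$OPTARG ;;
        *) echo "usage: $0 [-c pci_addr] [-u uuid] base_bdf" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # here "-c 0000:00:06.0" consumed, so this is the log's "shift 2"
device=$1               # 0000:00:07.0 in this run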
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75992
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75992
00:22:16.959 10:00:05 -- common/autotest_common.sh@829 -- # '[' -z 75992 ']'
00:22:16.959 10:00:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:16.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:16.959 10:00:05 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:16.959 10:00:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:16.959 10:00:05 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:16.959 10:00:05 -- common/autotest_common.sh@10 -- # set +x
00:22:16.959 10:00:05 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:22:16.959 [2024-12-15 10:00:05.924433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:17.221 [2024-12-15 10:00:05.924588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75992 ]
00:22:17.221 [2024-12-15 10:00:06.077460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:17.483 [2024-12-15 10:00:06.302014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:17.483 [2024-12-15 10:00:06.302241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:18.428 10:00:07 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:18.428 10:00:07 -- common/autotest_common.sh@862 -- # return 0
00:22:18.690 10:00:07 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:22:18.690 10:00:07 -- ftl/common.sh@54 -- # local name=nvme0
00:22:18.690 10:00:07 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:22:18.690 10:00:07 -- ftl/common.sh@56 -- # local size=103424
00:22:18.690 10:00:07 -- ftl/common.sh@59 -- # local base_bdev
00:22:18.690 10:00:07 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:22:18.952 10:00:07 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:22:18.952 10:00:07 -- ftl/common.sh@62 -- # local base_size
00:22:18.952 10:00:07 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:22:18.952 10:00:07 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1
00:22:18.952 10:00:07 -- common/autotest_common.sh@1368 -- # local bdev_info
00:22:18.952 10:00:07 -- common/autotest_common.sh@1369 -- # local bs
00:22:18.952 10:00:07 -- common/autotest_common.sh@1370 -- # local nb
00:22:18.952 10:00:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:22:18.952 10:00:07 -- common/autotest_common.sh@1371 -- #
bdev_info='[ 00:22:18.952 { 00:22:18.952 "name": "nvme0n1", 00:22:18.952 "aliases": [ 00:22:18.952 "3aad09c2-04c2-4e6e-b4d2-b6b69d108dde" 00:22:18.952 ], 00:22:18.952 "product_name": "NVMe disk", 00:22:18.952 "block_size": 4096, 00:22:18.952 "num_blocks": 1310720, 00:22:18.952 "uuid": "3aad09c2-04c2-4e6e-b4d2-b6b69d108dde", 00:22:18.952 "assigned_rate_limits": { 00:22:18.952 "rw_ios_per_sec": 0, 00:22:18.952 "rw_mbytes_per_sec": 0, 00:22:18.952 "r_mbytes_per_sec": 0, 00:22:18.952 "w_mbytes_per_sec": 0 00:22:18.952 }, 00:22:18.952 "claimed": true, 00:22:18.952 "claim_type": "read_many_write_one", 00:22:18.952 "zoned": false, 00:22:18.952 "supported_io_types": { 00:22:18.952 "read": true, 00:22:18.952 "write": true, 00:22:18.952 "unmap": true, 00:22:18.952 "write_zeroes": true, 00:22:18.952 "flush": true, 00:22:18.952 "reset": true, 00:22:18.952 "compare": true, 00:22:18.952 "compare_and_write": false, 00:22:18.952 "abort": true, 00:22:18.952 "nvme_admin": true, 00:22:18.952 "nvme_io": true 00:22:18.952 }, 00:22:18.952 "driver_specific": { 00:22:18.952 "nvme": [ 00:22:18.952 { 00:22:18.952 "pci_address": "0000:00:07.0", 00:22:18.952 "trid": { 00:22:18.952 "trtype": "PCIe", 00:22:18.952 "traddr": "0000:00:07.0" 00:22:18.952 }, 00:22:18.952 "ctrlr_data": { 00:22:18.952 "cntlid": 0, 00:22:18.952 "vendor_id": "0x1b36", 00:22:18.952 "model_number": "QEMU NVMe Ctrl", 00:22:18.952 "serial_number": "12341", 00:22:18.952 "firmware_revision": "8.0.0", 00:22:18.952 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:18.952 "oacs": { 00:22:18.952 "security": 0, 00:22:18.952 "format": 1, 00:22:18.952 "firmware": 0, 00:22:18.952 "ns_manage": 1 00:22:18.952 }, 00:22:18.952 "multi_ctrlr": false, 00:22:18.952 "ana_reporting": false 00:22:18.952 }, 00:22:18.952 "vs": { 00:22:18.952 "nvme_version": "1.4" 00:22:18.952 }, 00:22:18.952 "ns_data": { 00:22:18.952 "id": 1, 00:22:18.952 "can_share": false 00:22:18.952 } 00:22:18.952 } 00:22:18.953 ], 00:22:18.953 "mp_policy": "active_passive" 00:22:18.953 } 00:22:18.953 } 00:22:18.953 ]' 00:22:18.953 10:00:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:18.953 10:00:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:18.953 10:00:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:19.212 10:00:07 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:19.212 10:00:07 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:19.212 10:00:07 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:19.212 10:00:07 -- ftl/common.sh@63 -- # base_size=5120 00:22:19.212 10:00:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:19.212 10:00:07 -- ftl/common.sh@67 -- # clear_lvols 00:22:19.212 10:00:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:19.212 10:00:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:19.212 10:00:08 -- ftl/common.sh@28 -- # stores=d23bd721-b694-4582-a2c6-faa0c9305525 00:22:19.212 10:00:08 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:19.212 10:00:08 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d23bd721-b694-4582-a2c6-faa0c9305525 00:22:19.480 10:00:08 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:19.771 10:00:08 -- ftl/common.sh@68 -- # lvs=369b6992-0167-445b-915d-f3006b2fd532 00:22:19.771 10:00:08 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 
369b6992-0167-445b-915d-f3006b2fd532 00:22:19.771 10:00:08 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=543bf910-d310-44c9-b32b-a78f417269b8 00:22:19.771 10:00:08 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:19.771 10:00:08 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 543bf910-d310-44c9-b32b-a78f417269b8 00:22:19.771 10:00:08 -- ftl/common.sh@35 -- # local name=nvc0 00:22:19.771 10:00:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:19.771 10:00:08 -- ftl/common.sh@37 -- # local base_bdev=543bf910-d310-44c9-b32b-a78f417269b8 00:22:19.771 10:00:08 -- ftl/common.sh@38 -- # local cache_size= 00:22:19.771 10:00:08 -- ftl/common.sh@41 -- # get_bdev_size 543bf910-d310-44c9-b32b-a78f417269b8 00:22:19.771 10:00:08 -- common/autotest_common.sh@1367 -- # local bdev_name=543bf910-d310-44c9-b32b-a78f417269b8 00:22:19.771 10:00:08 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:19.771 10:00:08 -- common/autotest_common.sh@1369 -- # local bs 00:22:19.771 10:00:08 -- common/autotest_common.sh@1370 -- # local nb 00:22:19.771 10:00:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 543bf910-d310-44c9-b32b-a78f417269b8 00:22:20.054 10:00:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:20.054 { 00:22:20.054 "name": "543bf910-d310-44c9-b32b-a78f417269b8", 00:22:20.054 "aliases": [ 00:22:20.054 "lvs/nvme0n1p0" 00:22:20.054 ], 00:22:20.054 "product_name": "Logical Volume", 00:22:20.054 "block_size": 4096, 00:22:20.054 "num_blocks": 26476544, 00:22:20.054 "uuid": "543bf910-d310-44c9-b32b-a78f417269b8", 00:22:20.054 "assigned_rate_limits": { 00:22:20.054 "rw_ios_per_sec": 0, 00:22:20.054 "rw_mbytes_per_sec": 0, 00:22:20.054 "r_mbytes_per_sec": 0, 00:22:20.054 "w_mbytes_per_sec": 0 00:22:20.054 }, 00:22:20.054 "claimed": false, 00:22:20.054 "zoned": false, 00:22:20.055 "supported_io_types": { 00:22:20.055 "read": true, 00:22:20.055 "write": true, 00:22:20.055 "unmap": true, 00:22:20.055 "write_zeroes": true, 00:22:20.055 "flush": false, 00:22:20.055 "reset": true, 00:22:20.055 "compare": false, 00:22:20.055 "compare_and_write": false, 00:22:20.055 "abort": false, 00:22:20.055 "nvme_admin": false, 00:22:20.055 "nvme_io": false 00:22:20.055 }, 00:22:20.055 "driver_specific": { 00:22:20.055 "lvol": { 00:22:20.055 "lvol_store_uuid": "369b6992-0167-445b-915d-f3006b2fd532", 00:22:20.055 "base_bdev": "nvme0n1", 00:22:20.055 "thin_provision": true, 00:22:20.055 "snapshot": false, 00:22:20.055 "clone": false, 00:22:20.055 "esnap_clone": false 00:22:20.055 } 00:22:20.055 } 00:22:20.055 } 00:22:20.055 ]' 00:22:20.055 10:00:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:20.055 10:00:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:20.055 10:00:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:20.055 10:00:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:20.055 10:00:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:20.055 10:00:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:20.055 10:00:09 -- ftl/common.sh@41 -- # local base_size=5171 00:22:20.055 10:00:09 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:20.055 10:00:09 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:20.316 10:00:09 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:20.316 10:00:09 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:20.316 10:00:09 -- ftl/common.sh@48 
-- # get_bdev_size 543bf910-d310-44c9-b32b-a78f417269b8 00:22:20.316 10:00:09 -- common/autotest_common.sh@1367 -- # local bdev_name=543bf910-d310-44c9-b32b-a78f417269b8 00:22:20.316 10:00:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:20.316 10:00:09 -- common/autotest_common.sh@1369 -- # local bs 00:22:20.316 10:00:09 -- common/autotest_common.sh@1370 -- # local nb 00:22:20.316 10:00:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 543bf910-d310-44c9-b32b-a78f417269b8 00:22:20.577 10:00:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:20.578 { 00:22:20.578 "name": "543bf910-d310-44c9-b32b-a78f417269b8", 00:22:20.578 "aliases": [ 00:22:20.578 "lvs/nvme0n1p0" 00:22:20.578 ], 00:22:20.578 "product_name": "Logical Volume", 00:22:20.578 "block_size": 4096, 00:22:20.578 "num_blocks": 26476544, 00:22:20.578 "uuid": "543bf910-d310-44c9-b32b-a78f417269b8", 00:22:20.578 "assigned_rate_limits": { 00:22:20.578 "rw_ios_per_sec": 0, 00:22:20.578 "rw_mbytes_per_sec": 0, 00:22:20.578 "r_mbytes_per_sec": 0, 00:22:20.578 "w_mbytes_per_sec": 0 00:22:20.578 }, 00:22:20.578 "claimed": false, 00:22:20.578 "zoned": false, 00:22:20.578 "supported_io_types": { 00:22:20.578 "read": true, 00:22:20.578 "write": true, 00:22:20.578 "unmap": true, 00:22:20.578 "write_zeroes": true, 00:22:20.578 "flush": false, 00:22:20.578 "reset": true, 00:22:20.578 "compare": false, 00:22:20.578 "compare_and_write": false, 00:22:20.578 "abort": false, 00:22:20.578 "nvme_admin": false, 00:22:20.578 "nvme_io": false 00:22:20.578 }, 00:22:20.578 "driver_specific": { 00:22:20.578 "lvol": { 00:22:20.578 "lvol_store_uuid": "369b6992-0167-445b-915d-f3006b2fd532", 00:22:20.578 "base_bdev": "nvme0n1", 00:22:20.578 "thin_provision": true, 00:22:20.578 "snapshot": false, 00:22:20.578 "clone": false, 00:22:20.578 "esnap_clone": false 00:22:20.578 } 00:22:20.578 } 00:22:20.578 } 00:22:20.578 ]' 00:22:20.578 10:00:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:20.578 10:00:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:20.578 10:00:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:20.578 10:00:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:20.578 10:00:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:20.578 10:00:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:20.578 10:00:09 -- ftl/common.sh@48 -- # cache_size=5171 00:22:20.578 10:00:09 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:20.839 10:00:09 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:20.839 10:00:09 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 543bf910-d310-44c9-b32b-a78f417269b8 00:22:20.839 10:00:09 -- common/autotest_common.sh@1367 -- # local bdev_name=543bf910-d310-44c9-b32b-a78f417269b8 00:22:20.839 10:00:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:20.839 10:00:09 -- common/autotest_common.sh@1369 -- # local bs 00:22:20.839 10:00:09 -- common/autotest_common.sh@1370 -- # local nb 00:22:20.839 10:00:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 543bf910-d310-44c9-b32b-a78f417269b8 00:22:21.100 10:00:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:21.100 { 00:22:21.100 "name": "543bf910-d310-44c9-b32b-a78f417269b8", 00:22:21.100 "aliases": [ 00:22:21.100 "lvs/nvme0n1p0" 00:22:21.100 ], 00:22:21.100 "product_name": "Logical Volume", 00:22:21.100 
"block_size": 4096, 00:22:21.100 "num_blocks": 26476544, 00:22:21.100 "uuid": "543bf910-d310-44c9-b32b-a78f417269b8", 00:22:21.100 "assigned_rate_limits": { 00:22:21.100 "rw_ios_per_sec": 0, 00:22:21.100 "rw_mbytes_per_sec": 0, 00:22:21.100 "r_mbytes_per_sec": 0, 00:22:21.100 "w_mbytes_per_sec": 0 00:22:21.100 }, 00:22:21.100 "claimed": false, 00:22:21.100 "zoned": false, 00:22:21.100 "supported_io_types": { 00:22:21.100 "read": true, 00:22:21.100 "write": true, 00:22:21.100 "unmap": true, 00:22:21.100 "write_zeroes": true, 00:22:21.100 "flush": false, 00:22:21.100 "reset": true, 00:22:21.100 "compare": false, 00:22:21.100 "compare_and_write": false, 00:22:21.100 "abort": false, 00:22:21.100 "nvme_admin": false, 00:22:21.100 "nvme_io": false 00:22:21.100 }, 00:22:21.100 "driver_specific": { 00:22:21.100 "lvol": { 00:22:21.100 "lvol_store_uuid": "369b6992-0167-445b-915d-f3006b2fd532", 00:22:21.100 "base_bdev": "nvme0n1", 00:22:21.100 "thin_provision": true, 00:22:21.100 "snapshot": false, 00:22:21.100 "clone": false, 00:22:21.100 "esnap_clone": false 00:22:21.100 } 00:22:21.100 } 00:22:21.100 } 00:22:21.100 ]' 00:22:21.100 10:00:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:21.100 10:00:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:21.100 10:00:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:21.100 10:00:10 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:21.100 10:00:10 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:21.100 10:00:10 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:21.100 10:00:10 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:21.100 10:00:10 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 543bf910-d310-44c9-b32b-a78f417269b8 --l2p_dram_limit 10' 00:22:21.100 10:00:10 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:21.100 10:00:10 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:21.100 10:00:10 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:21.100 10:00:10 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 543bf910-d310-44c9-b32b-a78f417269b8 --l2p_dram_limit 10 -c nvc0n1p0 00:22:21.361 [2024-12-15 10:00:10.219522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.361 [2024-12-15 10:00:10.219589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:21.361 [2024-12-15 10:00:10.219608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:21.361 [2024-12-15 10:00:10.219619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.361 [2024-12-15 10:00:10.219693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.361 [2024-12-15 10:00:10.219704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.361 [2024-12-15 10:00:10.219716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:21.361 [2024-12-15 10:00:10.219725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.361 [2024-12-15 10:00:10.219749] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:21.361 [2024-12-15 10:00:10.220671] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:21.361 [2024-12-15 10:00:10.220700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action: Check configuration | duration: 0.007 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.219693] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev | duration: 0.050 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.219749] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:21.361 [2024-12-15 10:00:10.220671] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:21.361 [2024-12-15 10:00:10.220700] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev | duration: 0.953 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.220771] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 439e8229-b4b3-4098-a525-37f17a62794a
00:22:21.361 [2024-12-15 10:00:10.222646] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Default-initialize superblock | duration: 0.041 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.231746] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools | duration: 8.938 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.231916] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands | duration: 0.080 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.232009] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device | duration: 0.011 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.232069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:21.361 [2024-12-15 10:00:10.236748] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel | duration: 4.684 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.236850] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands | duration: 0.015 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.236924] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:22:21.361 [2024-12-15 10:00:10.237050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:22:21.361 [2024-12-15 10:00:10.237068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:21.361 [2024-12-15 10:00:10.237079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:22:21.361 [2024-12-15 10:00:10.237093] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237102] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237116] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:22:21.361 [2024-12-15 10:00:10.237134] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:22:21.361 [2024-12-15 10:00:10.237144] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:22:21.361 [2024-12-15 10:00:10.237153] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:22:21.361 [2024-12-15 10:00:10.237163] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout | duration: 0.242 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.237276] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout | duration: 0.069 ms | status: 0
00:22:21.361 [2024-12-15 10:00:10.237391] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:21.361 [2024-12-15 10:00:10.237402] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:22:21.361 [2024-12-15 10:00:10.237432] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237455] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:22:21.361 [2024-12-15 10:00:10.237483] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:22:21.361 [2024-12-15 10:00:10.237508] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 97.62 MiB, blocks 0.12 MiB
00:22:21.361 [2024-12-15 10:00:10.237534] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 97.75 MiB, blocks 0.12 MiB
00:22:21.361 [2024-12-15 10:00:10.237556] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc: offset 97.88 MiB, blocks 4096.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237581] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 4.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237603] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 85.12 MiB, blocks 4.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237626] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 89.12 MiB, blocks 4.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237648] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 93.12 MiB, blocks 4.00 MiB
00:22:21.361 [2024-12-15 10:00:10.237675] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 97.12 MiB, blocks 0.25 MiB
00:22:21.361 [2024-12-15 10:00:10.237697] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 97.38 MiB, blocks 0.25 MiB
00:22:21.362 [2024-12-15 10:00:10.237722] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:21.362 [2024-12-15 10:00:10.237730] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:22:21.362 [2024-12-15 10:00:10.237762] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:22:21.362 [2024-12-15 10:00:10.237785] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:22:21.362 [2024-12-15 10:00:10.237812] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:21.362 [2024-12-15 10:00:10.237822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:21.362 [2024-12-15 10:00:10.237833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:21.362 [2024-12-15 10:00:10.237840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:22:21.362 [2024-12-15 10:00:10.237849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:22:21.362 [2024-12-15 10:00:10.237856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:22:21.362 [2024-12-15 10:00:10.237865] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:22:21.362 [2024-12-15 10:00:10.237872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:22:21.362 [2024-12-15 10:00:10.237881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:22:21.362 [2024-12-15 10:00:10.237888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:22:21.362 [2024-12-15 10:00:10.237897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:22:21.362 [2024-12-15 10:00:10.237905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:22:21.362 [2024-12-15 10:00:10.237914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:22:21.362 [2024-12-15 10:00:10.237922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:22:21.362 [2024-12-15 10:00:10.237935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:22:21.362 [2024-12-15 10:00:10.237942] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:21.362 [2024-12-15 10:00:10.237954] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:21.362 [2024-12-15 10:00:10.237962] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:21.362 [2024-12-15 10:00:10.237971] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:21.362 [2024-12-15 10:00:10.237978] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:21.362 [2024-12-15 10:00:10.237987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:21.362 [2024-12-15 10:00:10.237995] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade | duration: 0.652 ms | status: 0
00:22:21.362 [2024-12-15 10:00:10.256978] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata | duration: 18.913 ms | status: 0
00:22:21.362 [2024-12-15 10:00:10.257154] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses | duration: 0.065 ms | status: 0
00:22:21.362 [2024-12-15 10:00:10.292620] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache | duration: 35.377 ms | status: 0
00:22:21.362 [2024-12-15 10:00:10.292723] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map | duration: 0.003 ms | status: 0
00:22:21.362 [2024-12-15 10:00:10.293396] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map | duration: 0.587 ms | status: 0
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.362 [2024-12-15 10:00:10.293609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:21.362 [2024-12-15 10:00:10.293619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.362 [2024-12-15 10:00:10.312657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.362 [2024-12-15 10:00:10.312699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.362 [2024-12-15 10:00:10.312711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.018 ms 00:22:21.362 [2024-12-15 10:00:10.312723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.362 [2024-12-15 10:00:10.326200] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:21.362 [2024-12-15 10:00:10.330068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.362 [2024-12-15 10:00:10.330106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.362 [2024-12-15 10:00:10.330121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.249 ms 00:22:21.362 [2024-12-15 10:00:10.330129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.621 [2024-12-15 10:00:10.417508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.621 [2024-12-15 10:00:10.417547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:21.621 [2024-12-15 10:00:10.417560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.345 ms 00:22:21.621 [2024-12-15 10:00:10.417567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.621 [2024-12-15 10:00:10.417610] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
00:22:21.621 [2024-12-15 10:00:10.417621] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:22:24.921 [2024-12-15 10:00:13.822437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.921 [2024-12-15 10:00:13.822529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:22:24.921 [2024-12-15 10:00:13.822552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3404.803 ms
00:22:24.921 [2024-12-15 10:00:13.822562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.922 [2024-12-15 10:00:13.822810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.922 [2024-12-15 10:00:13.822824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:22:24.922 [2024-12-15 10:00:13.822842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms
00:22:24.922 [2024-12-15 10:00:13.822852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.922 [2024-12-15 10:00:13.849971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.922 [2024-12-15 10:00:13.850022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:22:24.922 [2024-12-15 10:00:13.850040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.061 ms
00:22:24.922 [2024-12-15 10:00:13.850049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.922 [2024-12-15 10:00:13.875876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.922 [2024-12-15 10:00:13.875921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:22:24.922 [2024-12-15 10:00:13.875942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.768 ms
00:22:24.922 [2024-12-15 10:00:13.875949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:24.922 [2024-12-15 10:00:13.876321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:24.922 [2024-12-15 10:00:13.876333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:22:24.922 [2024-12-15 10:00:13.876345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms
00:22:24.922 [2024-12-15 10:00:13.876353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:25.183 [2024-12-15 10:00:13.951302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:25.183 [2024-12-15 10:00:13.951350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:22:25.183 [2024-12-15 10:00:13.951367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.889 ms
00:22:25.183 [2024-12-15 10:00:13.951375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:25.183 [2024-12-15 10:00:13.980077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:25.183 [2024-12-15 10:00:13.980126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:22:25.183 [2024-12-15 10:00:13.980142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.647 ms
00:22:25.183 [2024-12-15 10:00:13.980150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:25.183 [2024-12-15 10:00:13.981831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:25.183 [2024-12-15 10:00:13.981875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
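Every management step is traced as an Action / name / duration / status quadruple by trace_step() in mngt/ftl_mngt.c, which makes per-step timings easy to tabulate from a saved copy of this output. A sketch, assuming the log is stored one entry per line as originally emitted ('ftl.log' is a hypothetical file name):

# Pair each 'name:' line with the 'duration:' line that follows it and print
# a one-row-per-step summary, e.g. '  3404.803 ms  Scrub NV cache'.
awk '/407:trace_step/ { sub(/.*name: /, "");     step = $0 }
     /409:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, step }' ftl.log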
00:22:25.183 [2024-12-15 10:00:13.981891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:22:25.183 [2024-12-15 10:00:13.981901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.183 [2024-12-15 10:00:14.008406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.183 [2024-12-15 10:00:14.008450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:25.183 [2024-12-15 10:00:14.008465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.456 ms 00:22:25.183 [2024-12-15 10:00:14.008472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.183 [2024-12-15 10:00:14.008536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.183 [2024-12-15 10:00:14.008546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:25.183 [2024-12-15 10:00:14.008560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:25.183 [2024-12-15 10:00:14.008591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.183 [2024-12-15 10:00:14.008702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.183 [2024-12-15 10:00:14.008713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:25.183 [2024-12-15 10:00:14.008724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:25.183 [2024-12-15 10:00:14.008732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.183 [2024-12-15 10:00:14.010117] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3790.017 ms, result 0 00:22:25.183 { 00:22:25.183 "name": "ftl0", 00:22:25.183 "uuid": "439e8229-b4b3-4098-a525-37f17a62794a" 00:22:25.183 } 00:22:25.183 10:00:14 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:25.183 10:00:14 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:25.444 10:00:14 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:25.444 10:00:14 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:25.444 10:00:14 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:25.444 /dev/nbd0 00:22:25.444 10:00:14 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:25.444 10:00:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:25.444 10:00:14 -- common/autotest_common.sh@867 -- # local i 00:22:25.444 10:00:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:25.444 10:00:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:25.444 10:00:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:25.704 10:00:14 -- common/autotest_common.sh@871 -- # break 00:22:25.705 10:00:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:25.705 10:00:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:25.705 10:00:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:25.705 1+0 records in 00:22:25.705 1+0 records out 00:22:25.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353477 s, 11.6 MB/s 00:22:25.705 10:00:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:25.705 10:00:14 -- common/autotest_common.sh@884 -- # size=4096 00:22:25.705 10:00:14 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:25.705 10:00:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:25.705 10:00:14 -- common/autotest_common.sh@887 -- # return 0 00:22:25.705 10:00:14 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:25.705 [2024-12-15 10:00:14.539530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:25.705 [2024-12-15 10:00:14.539673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76147 ] 00:22:25.705 [2024-12-15 10:00:14.696630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.966 [2024-12-15 10:00:14.915161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.358  [2024-12-15T10:00:17.313Z] Copying: 193/1024 [MB] (193 MBps) [2024-12-15T10:00:18.247Z] Copying: 391/1024 [MB] (197 MBps) [2024-12-15T10:00:19.182Z] Copying: 648/1024 [MB] (257 MBps) [2024-12-15T10:00:19.748Z] Copying: 902/1024 [MB] (253 MBps) [2024-12-15T10:00:20.687Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:22:31.671 00:22:31.671 10:00:20 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:33.582 10:00:22 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:33.582 [2024-12-15 10:00:22.466030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
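Both spdk_dd legs here move --count x --bs bytes, so the 1024/1024 [MB] totals in the progress output are expected, and the elapsed times follow from the reported averages. A quick check with shell arithmetic (bc invocations are a sketch):

# 262144 blocks * 4096 bytes = 1 GiB, the '1024/1024 [MB]' total:
echo $(( 262144 * 4096 / 1024 / 1024 ))   # -> 1024 (MiB)
# at the 228 MBps average reported above, roughly 4.5 s; the ftl0-backed
# nbd leg that starts here averages ~32 MBps, i.e. about 32 s end to end:
echo "scale=1; 1024 / 228" | bc           # -> 4.4
echo "scale=0; 1024 / 32"  | bc           # -> 32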
00:22:33.582 [2024-12-15 10:00:22.466147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76230 ] 00:22:33.843 [2024-12-15 10:00:22.616227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.843 [2024-12-15 10:00:22.812965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.225  [2024-12-15T10:00:25.176Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-15T10:00:26.108Z] Copying: 53/1024 [MB] (33 MBps) [2024-12-15T10:00:27.484Z] Copying: 80/1024 [MB] (27 MBps) [2024-12-15T10:00:28.418Z] Copying: 112/1024 [MB] (31 MBps) [2024-12-15T10:00:29.353Z] Copying: 141/1024 [MB] (29 MBps) [2024-12-15T10:00:30.287Z] Copying: 175/1024 [MB] (33 MBps) [2024-12-15T10:00:31.221Z] Copying: 209/1024 [MB] (34 MBps) [2024-12-15T10:00:32.154Z] Copying: 242/1024 [MB] (32 MBps) [2024-12-15T10:00:33.087Z] Copying: 272/1024 [MB] (30 MBps) [2024-12-15T10:00:34.461Z] Copying: 306/1024 [MB] (33 MBps) [2024-12-15T10:00:35.394Z] Copying: 339/1024 [MB] (32 MBps) [2024-12-15T10:00:36.327Z] Copying: 372/1024 [MB] (33 MBps) [2024-12-15T10:00:37.258Z] Copying: 404/1024 [MB] (32 MBps) [2024-12-15T10:00:38.230Z] Copying: 437/1024 [MB] (32 MBps) [2024-12-15T10:00:39.164Z] Copying: 472/1024 [MB] (34 MBps) [2024-12-15T10:00:40.098Z] Copying: 507/1024 [MB] (35 MBps) [2024-12-15T10:00:41.471Z] Copying: 542/1024 [MB] (34 MBps) [2024-12-15T10:00:42.404Z] Copying: 577/1024 [MB] (35 MBps) [2024-12-15T10:00:43.338Z] Copying: 609/1024 [MB] (31 MBps) [2024-12-15T10:00:44.271Z] Copying: 642/1024 [MB] (32 MBps) [2024-12-15T10:00:45.208Z] Copying: 676/1024 [MB] (34 MBps) [2024-12-15T10:00:46.148Z] Copying: 712/1024 [MB] (35 MBps) [2024-12-15T10:00:47.081Z] Copying: 743/1024 [MB] (30 MBps) [2024-12-15T10:00:48.454Z] Copying: 773/1024 [MB] (30 MBps) [2024-12-15T10:00:49.387Z] Copying: 806/1024 [MB] (32 MBps) [2024-12-15T10:00:50.320Z] Copying: 841/1024 [MB] (35 MBps) [2024-12-15T10:00:51.254Z] Copying: 875/1024 [MB] (34 MBps) [2024-12-15T10:00:52.187Z] Copying: 907/1024 [MB] (31 MBps) [2024-12-15T10:00:53.121Z] Copying: 942/1024 [MB] (35 MBps) [2024-12-15T10:00:54.493Z] Copying: 973/1024 [MB] (30 MBps) [2024-12-15T10:00:54.752Z] Copying: 1007/1024 [MB] (34 MBps) [2024-12-15T10:00:55.319Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:23:06.303 00:23:06.303 10:00:55 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:06.303 10:00:55 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:06.564 10:00:55 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:06.564 [2024-12-15 10:00:55.565149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.564 [2024-12-15 10:00:55.565284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:06.564 [2024-12-15 10:00:55.565303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:06.564 [2024-12-15 10:00:55.565312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.564 [2024-12-15 10:00:55.565334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:06.564 [2024-12-15 10:00:55.567558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.564 [2024-12-15 10:00:55.567581] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:06.564 [2024-12-15 10:00:55.567592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:23:06.564 [2024-12-15 10:00:55.567598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.564 [2024-12-15 10:00:55.570229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.564 [2024-12-15 10:00:55.570270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:06.564 [2024-12-15 10:00:55.570284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:23:06.564 [2024-12-15 10:00:55.570290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.586289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.586316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:06.827 [2024-12-15 10:00:55.586326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.981 ms 00:23:06.827 [2024-12-15 10:00:55.586332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.590973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.591004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:06.827 [2024-12-15 10:00:55.591015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:23:06.827 [2024-12-15 10:00:55.591024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.609928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.609954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:06.827 [2024-12-15 10:00:55.609964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.843 ms 00:23:06.827 [2024-12-15 10:00:55.609970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.622647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.622676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:06.827 [2024-12-15 10:00:55.622688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.645 ms 00:23:06.827 [2024-12-15 10:00:55.622695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.622803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.622811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:06.827 [2024-12-15 10:00:55.622820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:06.827 [2024-12-15 10:00:55.622826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.641018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.641126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:06.827 [2024-12-15 10:00:55.641143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.173 ms 00:23:06.827 [2024-12-15 10:00:55.641149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.659575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 
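The unload traced through here persists the L2P, NV cache metadata, valid map, P2L, band and trim metadata, and finally the superblock; that is what earns the 'Set FTL clean state' step further down, letting the next load skip recovery. It was driven by three script lines visible in the xtrace above, repeated here as a readable recap (paths as used in this run):

# dirty_shutdown.sh lines 78-80: flush the nbd view of ftl0, stop the nbd
# export, then detach ftl0 so it persists its metadata and shuts down clean.
sync /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0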
[2024-12-15 10:00:55.659672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:06.827 [2024-12-15 10:00:55.659688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.397 ms 00:23:06.827 [2024-12-15 10:00:55.659694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.677343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.677368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:06.827 [2024-12-15 10:00:55.677378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.621 ms 00:23:06.827 [2024-12-15 10:00:55.677384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.695250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.827 [2024-12-15 10:00:55.695291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:06.827 [2024-12-15 10:00:55.695301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.807 ms 00:23:06.827 [2024-12-15 10:00:55.695307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.827 [2024-12-15 10:00:55.695337] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:06.827 [2024-12-15 10:00:55.695349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695451] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:06.827 [2024-12-15 10:00:55.695497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 
10:00:55.695638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:23:06.828 [2024-12-15 10:00:55.695814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.695994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:06.828 [2024-12-15 10:00:55.696051] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:06.828 [2024-12-15 10:00:55.696059] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 439e8229-b4b3-4098-a525-37f17a62794a 00:23:06.828 [2024-12-15 10:00:55.696067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:06.828 [2024-12-15 10:00:55.696074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:06.828 [2024-12-15 10:00:55.696080] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:06.828 [2024-12-15 10:00:55.696088] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:06.828 [2024-12-15 10:00:55.696094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:06.828 [2024-12-15 10:00:55.696101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:06.828 [2024-12-15 10:00:55.696107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:06.828 [2024-12-15 10:00:55.696113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:06.828 [2024-12-15 10:00:55.696117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:06.828 [2024-12-15 10:00:55.696126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.828 [2024-12-15 10:00:55.696131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:06.828 [2024-12-15 10:00:55.696139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:23:06.828 [2024-12-15 10:00:55.696144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.828 [2024-12-15 10:00:55.706330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.828 [2024-12-15 10:00:55.706429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:06.828 [2024-12-15 10:00:55.706449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.160 ms 00:23:06.829 [2024-12-15 10:00:55.706456] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.706619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.829 [2024-12-15 10:00:55.706627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:06.829 [2024-12-15 10:00:55.706635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:23:06.829 [2024-12-15 10:00:55.706641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.743721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.743748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.829 [2024-12-15 10:00:55.743758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.743764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.743814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.743820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.829 [2024-12-15 10:00:55.743828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.743834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.743892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.743900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.829 [2024-12-15 10:00:55.743908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.743915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.743930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.743936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.829 [2024-12-15 10:00:55.743944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.743949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.805590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.805746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.829 [2024-12-15 10:00:55.805763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.805770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.829571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.829599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.829 [2024-12-15 10:00:55.829609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.829615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.829677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.829686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.829 [2024-12-15 10:00:55.829694] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.829700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.829737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.829744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.829 [2024-12-15 10:00:55.829753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.829759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.829839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.829849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.829 [2024-12-15 10:00:55.829857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.829863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.829891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.829898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:06.829 [2024-12-15 10:00:55.829906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.829911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.829947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.829957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.829 [2024-12-15 10:00:55.829965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.829971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.830013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.829 [2024-12-15 10:00:55.830020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.829 [2024-12-15 10:00:55.830028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.829 [2024-12-15 10:00:55.830034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.829 [2024-12-15 10:00:55.830158] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.968 ms, result 0 00:23:06.829 true 00:23:07.090 10:00:55 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75992 00:23:07.090 10:00:55 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75992 00:23:07.090 10:00:55 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:07.090 [2024-12-15 10:00:55.915732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
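With ftl0 cleanly unloaded, the script hard-kills the spdk_tgt that hosted it (pid 75992) and re-drives the device from standalone spdk_dd processes using the JSON config saved earlier; the 'line 87: 75992 Killed' diagnostic that bash prints later is its report of that SIGKILL, attributed to the line it had reached by then. A paraphrase of the traced commands (variable names here are illustrative, not the script's own):

# dirty_shutdown.sh lines 83-88 as traced above: SIGKILL the target, drop its
# trace shm file, generate a second 1 GiB test file, then write it into ftl0
# at a 262144-block offset via spdk_dd alone.
kill -9 "$tgt_pid"                                   # 75992 in this run
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"
"$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$testfile2" --bs=4096 --count=262144
"$SPDK_BIN_DIR/spdk_dd" --if="$testfile2" --ob=ftl0 --count=262144 --seek=262144 \
    --json="$ftl_json_conf"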
00:23:07.090 [2024-12-15 10:00:55.915987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76582 ] 00:23:07.090 [2024-12-15 10:00:56.064789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.351 [2024-12-15 10:00:56.240577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.738  [2024-12-15T10:00:58.698Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-15T10:00:59.639Z] Copying: 512/1024 [MB] (258 MBps) [2024-12-15T10:01:00.572Z] Copying: 710/1024 [MB] (198 MBps) [2024-12-15T10:01:00.830Z] Copying: 968/1024 [MB] (258 MBps) [2024-12-15T10:01:01.397Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:23:12.381 00:23:12.381 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75992 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:12.381 10:01:01 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:12.381 [2024-12-15 10:01:01.343320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:12.381 [2024-12-15 10:01:01.343440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76646 ] 00:23:12.640 [2024-12-15 10:01:01.494827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.640 [2024-12-15 10:01:01.638831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.898 [2024-12-15 10:01:01.845078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:12.898 [2024-12-15 10:01:01.845128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:12.898 [2024-12-15 10:01:01.904720] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:12.898 [2024-12-15 10:01:01.904992] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:12.898 [2024-12-15 10:01:01.905161] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:13.158 [2024-12-15 10:01:02.078685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.078721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:13.158 [2024-12-15 10:01:02.078731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.158 [2024-12-15 10:01:02.078737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.078771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.078779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:13.158 [2024-12-15 10:01:02.078787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:13.158 [2024-12-15 10:01:02.078793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.078805] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:13.158 [2024-12-15 10:01:02.079375] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:13.158 [2024-12-15 10:01:02.079389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.079395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:13.158 [2024-12-15 10:01:02.079401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:23:13.158 [2024-12-15 10:01:02.079407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.080340] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:13.158 [2024-12-15 10:01:02.090126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.090154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:13.158 [2024-12-15 10:01:02.090163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.787 ms 00:23:13.158 [2024-12-15 10:01:02.090169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.090209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.090218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:13.158 [2024-12-15 10:01:02.090225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:13.158 [2024-12-15 10:01:02.090231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.094663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.094788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:13.158 [2024-12-15 10:01:02.094801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.382 ms 00:23:13.158 [2024-12-15 10:01:02.094807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.094874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.094881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:13.158 [2024-12-15 10:01:02.094891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:13.158 [2024-12-15 10:01:02.094896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.094933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.094940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:13.158 [2024-12-15 10:01:02.094946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:13.158 [2024-12-15 10:01:02.094952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.094972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:13.158 [2024-12-15 10:01:02.097755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.097777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.158 [2024-12-15 10:01:02.097784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:23:13.158 [2024-12-15 10:01:02.097790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.097816] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.097822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:13.158 [2024-12-15 10:01:02.097828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:13.158 [2024-12-15 10:01:02.097833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.097849] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:13.158 [2024-12-15 10:01:02.097862] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:13.158 [2024-12-15 10:01:02.097887] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:13.158 [2024-12-15 10:01:02.097901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:13.158 [2024-12-15 10:01:02.097957] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:13.158 [2024-12-15 10:01:02.097965] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:13.158 [2024-12-15 10:01:02.097972] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:13.158 [2024-12-15 10:01:02.097979] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:13.158 [2024-12-15 10:01:02.097986] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:13.158 [2024-12-15 10:01:02.097991] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:13.158 [2024-12-15 10:01:02.097997] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:13.158 [2024-12-15 10:01:02.098002] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:13.158 [2024-12-15 10:01:02.098008] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:13.158 [2024-12-15 10:01:02.098015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.098021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:13.158 [2024-12-15 10:01:02.098027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:23:13.158 [2024-12-15 10:01:02.098032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.098077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.158 [2024-12-15 10:01:02.098083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:13.158 [2024-12-15 10:01:02.098089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:13.158 [2024-12-15 10:01:02.098094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.158 [2024-12-15 10:01:02.098147] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:13.158 [2024-12-15 10:01:02.098154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:13.158 [2024-12-15 10:01:02.098161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.158 [2024-12-15 10:01:02.098167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:13.158 [2024-12-15 10:01:02.098172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:13.158 [2024-12-15 10:01:02.098177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:13.158 [2024-12-15 10:01:02.098182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:13.158 [2024-12-15 10:01:02.098187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:13.158 [2024-12-15 10:01:02.098192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:13.158 [2024-12-15 10:01:02.098197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.158 [2024-12-15 10:01:02.098202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:13.158 [2024-12-15 10:01:02.098208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:13.158 [2024-12-15 10:01:02.098217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.158 [2024-12-15 10:01:02.098223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:13.158 [2024-12-15 10:01:02.098228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:13.158 [2024-12-15 10:01:02.098232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.158 [2024-12-15 10:01:02.098237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:13.158 [2024-12-15 10:01:02.098242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:13.158 [2024-12-15 10:01:02.098246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:13.159 [2024-12-15 10:01:02.098398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:13.159 [2024-12-15 10:01:02.098414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:13.159 [2024-12-15 10:01:02.098441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:13.159 [2024-12-15 10:01:02.098483] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:13.159 [2024-12-15 10:01:02.098563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:13.159 [2024-12-15 10:01:02.098605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:13.159 [2024-12-15 10:01:02.098647] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.159 [2024-12-15 10:01:02.098674] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:13.159 [2024-12-15 10:01:02.098714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:13.159 [2024-12-15 10:01:02.098730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.159 [2024-12-15 10:01:02.098744] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:13.159 [2024-12-15 10:01:02.098758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:13.159 [2024-12-15 10:01:02.098772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.159 [2024-12-15 10:01:02.098802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:13.159 [2024-12-15 10:01:02.098817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:13.159 [2024-12-15 10:01:02.098830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:13.159 [2024-12-15 10:01:02.098844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:13.159 [2024-12-15 10:01:02.098880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:13.159 [2024-12-15 10:01:02.098896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:13.159 [2024-12-15 10:01:02.098911] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:13.159 [2024-12-15 10:01:02.098935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.159 [2024-12-15 10:01:02.098958] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:13.159 [2024-12-15 10:01:02.098979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:13.159 [2024-12-15 10:01:02.099000] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:13.159 [2024-12-15 10:01:02.099022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:13.159 [2024-12-15 10:01:02.099068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:13.159 [2024-12-15 10:01:02.099075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:13.159 [2024-12-15 10:01:02.099080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:13.159 [2024-12-15 10:01:02.099086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:13.159 [2024-12-15 10:01:02.099091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:13.159 [2024-12-15 10:01:02.099097] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:13.159 [2024-12-15 10:01:02.099102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:13.159 [2024-12-15 10:01:02.099108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:13.159 [2024-12-15 10:01:02.099114] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:13.159 [2024-12-15 10:01:02.099119] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:13.159 [2024-12-15 10:01:02.099125] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.159 [2024-12-15 10:01:02.099135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:13.159 [2024-12-15 10:01:02.099140] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:13.159 [2024-12-15 10:01:02.099147] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:13.159 [2024-12-15 10:01:02.099152] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:13.159 [2024-12-15 10:01:02.099159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.099164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:13.159 [2024-12-15 10:01:02.099170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:23:13.159 [2024-12-15 10:01:02.099176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.111067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.111098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.159 [2024-12-15 10:01:02.111107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.847 ms 00:23:13.159 [2024-12-15 10:01:02.111112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.111177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.111183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:13.159 [2024-12-15 10:01:02.111189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:13.159 [2024-12-15 10:01:02.111195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.154431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.154463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.159 [2024-12-15 10:01:02.154473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.201 ms 00:23:13.159 [2024-12-15 10:01:02.154481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.154515] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.154523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.159 [2024-12-15 10:01:02.154530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:13.159 [2024-12-15 10:01:02.154539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.154849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.154868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.159 [2024-12-15 10:01:02.154875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:23:13.159 [2024-12-15 10:01:02.154881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.154968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.154979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.159 [2024-12-15 10:01:02.154986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:13.159 [2024-12-15 10:01:02.154992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.159 [2024-12-15 10:01:02.166076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.159 [2024-12-15 10:01:02.166103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.159 [2024-12-15 10:01:02.166111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.068 ms 00:23:13.159 [2024-12-15 10:01:02.166117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.176107] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:13.418 [2024-12-15 10:01:02.176134] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:13.418 [2024-12-15 10:01:02.176142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.176149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:13.418 [2024-12-15 10:01:02.176156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.955 ms 00:23:13.418 [2024-12-15 10:01:02.176161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.194756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.194783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:13.418 [2024-12-15 10:01:02.194795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.564 ms 00:23:13.418 [2024-12-15 10:01:02.194802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.204045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.204073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:13.418 [2024-12-15 10:01:02.204081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.211 ms 00:23:13.418 [2024-12-15 10:01:02.204093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.214828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
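The layout dump above can be sanity-checked mechanically: the decimal MiB figures and the hex blk_offs/blk_sz rows of the SB metadata dump describe the same regions, so they must agree once a block size is fixed. A minimal sketch of that cross-check (hypothetical, not part of this job's output), assuming only the 4 KiB FTL block size that the dump itself implies (base-dev region type:0x9 spans 0x1900000 blocks and is reported as 102400.00 MiB):

    # cross_check_layout.py - hypothetical helper, not produced by this run
    FTL_BLOCK = 4096          # bytes per FTL block, implied by the dump itself
    MIB = 1024 * 1024

    # "L2P entries: 20971520" x "L2P address size: 4" == "Region l2p ... 80.00 MiB"
    assert 20971520 * 4 / MIB == 80.0

    # SB metadata row "type:0x2 ... blk_sz:0x5000" is the same L2P region in blocks
    assert 0x5000 * FTL_BLOCK / MIB == 80.0

    # "type:0x9 ... blk_sz:0x1900000" matches "Region data_btm ... 102400.00 MiB"
    assert 0x1900000 * FTL_BLOCK / MIB == 102400.0

    # regions sit back to back: p2l0..p2l3 at 81.12, 85.12, 89.12, 93.12 MiB,
    # each 4.00 MiB long (offsets kept in hundredths of MiB to stay integral)
    p2l_offsets = [8112 + i * 400 for i in range(4)]
    assert p2l_offsets == [8112, 8512, 8912, 9312]

All four checks pass against the numbers logged above, consistent with the "Verify layout" step returning status 0.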
00:23:13.418 [2024-12-15 10:01:02.214855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:13.418 [2024-12-15 10:01:02.214863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.056 ms 00:23:13.418 [2024-12-15 10:01:02.214869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.215152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.215162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:13.418 [2024-12-15 10:01:02.215169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:23:13.418 [2024-12-15 10:01:02.215175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.261287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.261329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:13.418 [2024-12-15 10:01:02.261340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.097 ms 00:23:13.418 [2024-12-15 10:01:02.261347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.270276] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:13.418 [2024-12-15 10:01:02.272366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.272481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:13.418 [2024-12-15 10:01:02.272497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.969 ms 00:23:13.418 [2024-12-15 10:01:02.272505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.272580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.272589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:13.418 [2024-12-15 10:01:02.272596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:13.418 [2024-12-15 10:01:02.272603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.272656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.272665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:13.418 [2024-12-15 10:01:02.272671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:13.418 [2024-12-15 10:01:02.272677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.273611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.273633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:13.418 [2024-12-15 10:01:02.273640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:23:13.418 [2024-12-15 10:01:02.273649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.273674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.273681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:13.418 [2024-12-15 10:01:02.273689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:13.418 
[2024-12-15 10:01:02.273695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.273721] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:13.418 [2024-12-15 10:01:02.273728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.273734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:13.418 [2024-12-15 10:01:02.273741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:13.418 [2024-12-15 10:01:02.273746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.293193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.293224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:13.418 [2024-12-15 10:01:02.293233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.433 ms 00:23:13.418 [2024-12-15 10:01:02.293239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.293300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.418 [2024-12-15 10:01:02.293308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:13.418 [2024-12-15 10:01:02.293314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:13.418 [2024-12-15 10:01:02.293320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.418 [2024-12-15 10:01:02.294200] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.175 ms, result 0 00:23:14.359  [2024-12-15T10:01:04.321Z] Copying: 34/1024 [MB] (34 MBps) [2024-12-15T10:01:05.743Z] Copying: 50/1024 [MB] (16 MBps) [2024-12-15T10:01:06.317Z] Copying: 68/1024 [MB] (17 MBps) [2024-12-15T10:01:07.705Z] Copying: 80/1024 [MB] (11 MBps) [2024-12-15T10:01:08.649Z] Copying: 97/1024 [MB] (16 MBps) [2024-12-15T10:01:09.591Z] Copying: 110/1024 [MB] (13 MBps) [2024-12-15T10:01:10.536Z] Copying: 128/1024 [MB] (17 MBps) [2024-12-15T10:01:11.478Z] Copying: 143/1024 [MB] (15 MBps) [2024-12-15T10:01:12.421Z] Copying: 156/1024 [MB] (13 MBps) [2024-12-15T10:01:13.363Z] Copying: 174/1024 [MB] (17 MBps) [2024-12-15T10:01:14.746Z] Copying: 192/1024 [MB] (17 MBps) [2024-12-15T10:01:15.319Z] Copying: 211/1024 [MB] (19 MBps) [2024-12-15T10:01:16.708Z] Copying: 240/1024 [MB] (28 MBps) [2024-12-15T10:01:17.654Z] Copying: 251/1024 [MB] (11 MBps) [2024-12-15T10:01:18.600Z] Copying: 267744/1048576 [kB] (10236 kBps) [2024-12-15T10:01:19.546Z] Copying: 271/1024 [MB] (10 MBps) [2024-12-15T10:01:20.489Z] Copying: 282/1024 [MB] (10 MBps) [2024-12-15T10:01:21.433Z] Copying: 294/1024 [MB] (11 MBps) [2024-12-15T10:01:22.378Z] Copying: 306/1024 [MB] (12 MBps) [2024-12-15T10:01:23.317Z] Copying: 322/1024 [MB] (15 MBps) [2024-12-15T10:01:24.700Z] Copying: 357/1024 [MB] (35 MBps) [2024-12-15T10:01:25.643Z] Copying: 384/1024 [MB] (27 MBps) [2024-12-15T10:01:26.583Z] Copying: 398/1024 [MB] (13 MBps) [2024-12-15T10:01:27.526Z] Copying: 415/1024 [MB] (17 MBps) [2024-12-15T10:01:28.469Z] Copying: 437/1024 [MB] (21 MBps) [2024-12-15T10:01:29.412Z] Copying: 459/1024 [MB] (22 MBps) [2024-12-15T10:01:30.353Z] Copying: 479/1024 [MB] (19 MBps) [2024-12-15T10:01:31.738Z] Copying: 501/1024 [MB] (21 MBps) [2024-12-15T10:01:32.310Z] Copying: 520/1024 [MB] (19 MBps) [2024-12-15T10:01:33.696Z] Copying: 
539/1024 [MB] (19 MBps) [2024-12-15T10:01:34.655Z] Copying: 561/1024 [MB] (21 MBps) [2024-12-15T10:01:35.638Z] Copying: 580/1024 [MB] (19 MBps) [2024-12-15T10:01:36.580Z] Copying: 599/1024 [MB] (18 MBps) [2024-12-15T10:01:37.521Z] Copying: 614/1024 [MB] (15 MBps) [2024-12-15T10:01:38.462Z] Copying: 628/1024 [MB] (13 MBps) [2024-12-15T10:01:39.404Z] Copying: 646/1024 [MB] (18 MBps) [2024-12-15T10:01:40.349Z] Copying: 656/1024 [MB] (10 MBps) [2024-12-15T10:01:41.735Z] Copying: 671/1024 [MB] (15 MBps) [2024-12-15T10:01:42.680Z] Copying: 698080/1048576 [kB] (10216 kBps) [2024-12-15T10:01:43.624Z] Copying: 692/1024 [MB] (10 MBps) [2024-12-15T10:01:44.566Z] Copying: 702/1024 [MB] (10 MBps) [2024-12-15T10:01:45.509Z] Copying: 712/1024 [MB] (10 MBps) [2024-12-15T10:01:46.454Z] Copying: 725/1024 [MB] (13 MBps) [2024-12-15T10:01:47.397Z] Copying: 738/1024 [MB] (12 MBps) [2024-12-15T10:01:48.338Z] Copying: 766432/1048576 [kB] (10056 kBps) [2024-12-15T10:01:49.724Z] Copying: 761/1024 [MB] (12 MBps) [2024-12-15T10:01:50.668Z] Copying: 771/1024 [MB] (10 MBps) [2024-12-15T10:01:51.611Z] Copying: 799992/1048576 [kB] (10232 kBps) [2024-12-15T10:01:52.554Z] Copying: 810008/1048576 [kB] (10016 kBps) [2024-12-15T10:01:53.497Z] Copying: 801/1024 [MB] (10 MBps) [2024-12-15T10:01:54.440Z] Copying: 811/1024 [MB] (10 MBps) [2024-12-15T10:01:55.382Z] Copying: 821/1024 [MB] (10 MBps) [2024-12-15T10:01:56.319Z] Copying: 851552/1048576 [kB] (10232 kBps) [2024-12-15T10:01:57.705Z] Copying: 851/1024 [MB] (19 MBps) [2024-12-15T10:01:58.646Z] Copying: 861/1024 [MB] (10 MBps) [2024-12-15T10:01:59.590Z] Copying: 874/1024 [MB] (12 MBps) [2024-12-15T10:02:00.527Z] Copying: 884/1024 [MB] (10 MBps) [2024-12-15T10:02:01.464Z] Copying: 902/1024 [MB] (18 MBps) [2024-12-15T10:02:02.406Z] Copying: 926/1024 [MB] (23 MBps) [2024-12-15T10:02:03.339Z] Copying: 937/1024 [MB] (10 MBps) [2024-12-15T10:02:04.315Z] Copying: 961/1024 [MB] (24 MBps) [2024-12-15T10:02:05.700Z] Copying: 985/1024 [MB] (24 MBps) [2024-12-15T10:02:06.633Z] Copying: 1019776/1048576 [kB] (10208 kBps) [2024-12-15T10:02:07.575Z] Copying: 1019/1024 [MB] (23 MBps) [2024-12-15T10:02:07.575Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-15 10:02:07.260582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.260673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:18.559 [2024-12-15 10:02:07.260691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:18.559 [2024-12-15 10:02:07.260700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.260834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:18.559 [2024-12-15 10:02:07.263687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.263868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:18.559 [2024-12-15 10:02:07.263897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:24:18.559 [2024-12-15 10:02:07.263906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.275824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.275873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:18.559 [2024-12-15 10:02:07.275895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 6.870 ms 00:24:18.559 [2024-12-15 10:02:07.275904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.298395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.298555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:18.559 [2024-12-15 10:02:07.298576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.471 ms 00:24:18.559 [2024-12-15 10:02:07.298585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.309913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.310156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:18.559 [2024-12-15 10:02:07.310185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.921 ms 00:24:18.559 [2024-12-15 10:02:07.310196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.338618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.338675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:18.559 [2024-12-15 10:02:07.338689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.288 ms 00:24:18.559 [2024-12-15 10:02:07.338697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.355566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.355619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:18.559 [2024-12-15 10:02:07.355633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.817 ms 00:24:18.559 [2024-12-15 10:02:07.355641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.425199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.425376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:18.559 [2024-12-15 10:02:07.425395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.502 ms 00:24:18.559 [2024-12-15 10:02:07.425404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.450709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.450752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:18.559 [2024-12-15 10:02:07.450764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.279 ms 00:24:18.559 [2024-12-15 10:02:07.450771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.476238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.476295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:18.559 [2024-12-15 10:02:07.476308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.424 ms 00:24:18.559 [2024-12-15 10:02:07.476316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.502215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.502284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:18.559 [2024-12-15 
10:02:07.502297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.851 ms 00:24:18.559 [2024-12-15 10:02:07.502305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.527968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.559 [2024-12-15 10:02:07.528155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:18.559 [2024-12-15 10:02:07.528178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.556 ms 00:24:18.559 [2024-12-15 10:02:07.528185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.559 [2024-12-15 10:02:07.528224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:18.559 [2024-12-15 10:02:07.528239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109312 / 261120 wr_cnt: 1 state: open 00:24:18.559 [2024-12-15 10:02:07.528274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528418] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:18.559 [2024-12-15 10:02:07.528481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 
10:02:07.528629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:24:18.560 [2024-12-15 10:02:07.528838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.528997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:18.560 [2024-12-15 10:02:07.529096] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:18.560 [2024-12-15 10:02:07.529108] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 439e8229-b4b3-4098-a525-37f17a62794a 00:24:18.560 [2024-12-15 10:02:07.529117] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109312 00:24:18.560 [2024-12-15 10:02:07.529125] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110272 00:24:18.560 [2024-12-15 10:02:07.529132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109312 00:24:18.560 [2024-12-15 10:02:07.529147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:24:18.560 [2024-12-15 10:02:07.529155] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:18.560 [2024-12-15 10:02:07.529164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:18.560 [2024-12-15 10:02:07.529172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:18.560 [2024-12-15 10:02:07.529178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:18.560 [2024-12-15 10:02:07.529185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:18.560 [2024-12-15 10:02:07.529194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.560 [2024-12-15 10:02:07.529202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:18.560 [2024-12-15 10:02:07.529211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:24:18.560 [2024-12-15 10:02:07.529219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.560 [2024-12-15 10:02:07.543344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.560 [2024-12-15 10:02:07.543516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:18.560 [2024-12-15 10:02:07.543573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.058 ms 00:24:18.560 [2024-12-15 10:02:07.543596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.560 [2024-12-15 10:02:07.543827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.560 [2024-12-15 10:02:07.543973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:18.560 [2024-12-15 10:02:07.544048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:24:18.560 [2024-12-15 10:02:07.544070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 
10:02:07.583636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.583821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:18.822 [2024-12-15 10:02:07.583884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.583906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.583988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.584012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.822 [2024-12-15 10:02:07.584040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.584059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.584152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.584244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.822 [2024-12-15 10:02:07.584297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.584318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.584351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.584372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.822 [2024-12-15 10:02:07.584392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.584421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.665374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.665582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:18.822 [2024-12-15 10:02:07.665645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.665667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.698295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.698482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:18.822 [2024-12-15 10:02:07.698553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.698577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.698661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.698686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:18.822 [2024-12-15 10:02:07.698707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.698726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.822 [2024-12-15 10:02:07.698779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.822 [2024-12-15 10:02:07.698858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:18.822 [2024-12-15 10:02:07.698881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.822 [2024-12-15 10:02:07.698903] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.823 [2024-12-15 10:02:07.699032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.823 [2024-12-15 10:02:07.699059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:18.823 [2024-12-15 10:02:07.699079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.823 [2024-12-15 10:02:07.699098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.823 [2024-12-15 10:02:07.699197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.823 [2024-12-15 10:02:07.699224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:18.823 [2024-12-15 10:02:07.699244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.823 [2024-12-15 10:02:07.699307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.823 [2024-12-15 10:02:07.699369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.823 [2024-12-15 10:02:07.699907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:18.823 [2024-12-15 10:02:07.699969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.823 [2024-12-15 10:02:07.699993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.823 [2024-12-15 10:02:07.700086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:18.823 [2024-12-15 10:02:07.700114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:18.823 [2024-12-15 10:02:07.700134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:18.823 [2024-12-15 10:02:07.700154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.823 [2024-12-15 10:02:07.700331] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 441.302 ms, result 0 00:24:20.210 00:24:20.210 00:24:20.210 10:02:09 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:22.759 10:02:11 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:22.759 [2024-12-15 10:02:11.401165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
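While the next spdk_dd pass is still initializing here, the shutdown statistics dumped by ftl_debug.c above can be re-derived the same way. A short sketch (hypothetical, not produced by the job), assuming the copy-progress markers roughly bound the transfer window and that --count in the spdk_dd invocation above is in 4 KiB FTL blocks (an assumption supported by 262144 x 4 KiB being exactly the 1024 MiB the progress run covered):

    # check_stats.py - hypothetical helper, not produced by this run
    total_writes, user_writes = 110272, 109312     # from ftl_dev_dump_stats above
    waf = total_writes / user_writes
    assert f"{waf:.4f}" == "1.0088"                # matches "WAF: 1.0088"

    # total minus user writes: 960 blocks written by the FTL itself
    # (metadata and similar internal traffic), hence WAF just above 1.0
    assert total_writes - user_writes == 960

    # --count=262144 blocks at 4 KiB each is 1024 MiB, the size the
    # "Copying: 1024/1024 [MB]" progress run above worked through
    assert 262144 * 4096 == 1024 * 1024 * 1024

    # the copy window above spans roughly 65 s (startup finished 10:01:02,
    # last progress marker 10:02:07), giving the reported average rate
    print(1024 // 65)                              # -> 15, i.e. "(average 15 MBps)"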
00:24:22.759 [2024-12-15 10:02:11.401408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77368 ] 00:24:22.759 [2024-12-15 10:02:11.546274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.759 [2024-12-15 10:02:11.745117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.021 [2024-12-15 10:02:12.034103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:23.021 [2024-12-15 10:02:12.034184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:23.283 [2024-12-15 10:02:12.189771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.189833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:23.283 [2024-12-15 10:02:12.189848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:23.283 [2024-12-15 10:02:12.189859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.189914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.189925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:23.283 [2024-12-15 10:02:12.189933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:23.283 [2024-12-15 10:02:12.189941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.189961] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:23.283 [2024-12-15 10:02:12.190727] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:23.283 [2024-12-15 10:02:12.190747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.190757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:23.283 [2024-12-15 10:02:12.190766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:24:23.283 [2024-12-15 10:02:12.190774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.192502] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:23.283 [2024-12-15 10:02:12.206652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.206697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:23.283 [2024-12-15 10:02:12.206710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.152 ms 00:24:23.283 [2024-12-15 10:02:12.206718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.206791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.206801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:23.283 [2024-12-15 10:02:12.206810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:23.283 [2024-12-15 10:02:12.206818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.214901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 
10:02:12.215081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:23.283 [2024-12-15 10:02:12.215100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.004 ms 00:24:23.283 [2024-12-15 10:02:12.215109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.215208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.215217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:23.283 [2024-12-15 10:02:12.215226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:23.283 [2024-12-15 10:02:12.215234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.215312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.215323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:23.283 [2024-12-15 10:02:12.215331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:23.283 [2024-12-15 10:02:12.215340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.215371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:23.283 [2024-12-15 10:02:12.219466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.219504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:23.283 [2024-12-15 10:02:12.219514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:24:23.283 [2024-12-15 10:02:12.219522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.219560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.219569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:23.283 [2024-12-15 10:02:12.219577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:23.283 [2024-12-15 10:02:12.219587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.219638] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:23.283 [2024-12-15 10:02:12.219661] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:23.283 [2024-12-15 10:02:12.219696] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:23.283 [2024-12-15 10:02:12.219712] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:23.283 [2024-12-15 10:02:12.219787] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:23.283 [2024-12-15 10:02:12.219798] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:23.283 [2024-12-15 10:02:12.219811] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:23.283 [2024-12-15 10:02:12.219822] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:23.283 [2024-12-15 10:02:12.219831] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:23.283 [2024-12-15 10:02:12.219839] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:23.283 [2024-12-15 10:02:12.219847] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:23.283 [2024-12-15 10:02:12.219855] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:23.283 [2024-12-15 10:02:12.219861] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:23.283 [2024-12-15 10:02:12.219870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.219878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:23.283 [2024-12-15 10:02:12.219886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:24:23.283 [2024-12-15 10:02:12.219893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.219955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.283 [2024-12-15 10:02:12.219965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:23.283 [2024-12-15 10:02:12.219973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:23.283 [2024-12-15 10:02:12.219980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.283 [2024-12-15 10:02:12.220051] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:23.283 [2024-12-15 10:02:12.220061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:23.283 [2024-12-15 10:02:12.220070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:23.283 [2024-12-15 10:02:12.220094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220110] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:23.283 [2024-12-15 10:02:12.220117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:23.283 [2024-12-15 10:02:12.220132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:23.283 [2024-12-15 10:02:12.220141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:23.283 [2024-12-15 10:02:12.220149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:23.283 [2024-12-15 10:02:12.220156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:23.283 [2024-12-15 10:02:12.220163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:23.283 [2024-12-15 10:02:12.220170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:23.283 [2024-12-15 10:02:12.220190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:23.283 [2024-12-15 10:02:12.220197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:23.283 [2024-12-15 10:02:12.220210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:23.283 [2024-12-15 10:02:12.220217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:23.283 [2024-12-15 10:02:12.220230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:23.283 [2024-12-15 10:02:12.220249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:23.283 [2024-12-15 10:02:12.220298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:23.283 [2024-12-15 10:02:12.220319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:23.283 [2024-12-15 10:02:12.220332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:23.283 [2024-12-15 10:02:12.220338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:23.283 [2024-12-15 10:02:12.220345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:23.283 [2024-12-15 10:02:12.220352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:23.283 [2024-12-15 10:02:12.220359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:23.283 [2024-12-15 10:02:12.220365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:23.283 [2024-12-15 10:02:12.220371] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:23.283 [2024-12-15 10:02:12.220382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:23.284 [2024-12-15 10:02:12.220389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:23.284 [2024-12-15 10:02:12.220398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.284 [2024-12-15 10:02:12.220407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:23.284 [2024-12-15 10:02:12.220415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:23.284 [2024-12-15 10:02:12.220422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:23.284 [2024-12-15 10:02:12.220429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:23.284 [2024-12-15 10:02:12.220436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:23.284 [2024-12-15 10:02:12.220443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:23.284 [2024-12-15 10:02:12.220451] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:23.284 [2024-12-15 10:02:12.220460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:23.284 [2024-12-15 10:02:12.220469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:23.284 [2024-12-15 10:02:12.220477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:23.284 [2024-12-15 10:02:12.220484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:23.284 [2024-12-15 10:02:12.220499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:23.284 [2024-12-15 10:02:12.220507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:23.284 [2024-12-15 10:02:12.220514] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:23.284 [2024-12-15 10:02:12.220523] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:23.284 [2024-12-15 10:02:12.220530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:23.284 [2024-12-15 10:02:12.220537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:23.284 [2024-12-15 10:02:12.220544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:23.284 [2024-12-15 10:02:12.220551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:23.284 [2024-12-15 10:02:12.220558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:23.284 [2024-12-15 10:02:12.220566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:23.284 [2024-12-15 10:02:12.220573] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:23.284 [2024-12-15 10:02:12.220581] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:23.284 [2024-12-15 10:02:12.220589] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:23.284 [2024-12-15 10:02:12.220596] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:23.284 [2024-12-15 10:02:12.220629] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:23.284 [2024-12-15 10:02:12.220637] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:23.284 [2024-12-15 10:02:12.220645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.220653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:23.284 [2024-12-15 10:02:12.220661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:24:23.284 [2024-12-15 10:02:12.220668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.284 [2024-12-15 10:02:12.238561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.238609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:23.284 [2024-12-15 10:02:12.238621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.849 ms 00:24:23.284 [2024-12-15 10:02:12.238636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.284 [2024-12-15 10:02:12.238729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.238739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:23.284 [2024-12-15 10:02:12.238749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:23.284 [2024-12-15 10:02:12.238758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.284 [2024-12-15 10:02:12.283803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.283857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:23.284 [2024-12-15 10:02:12.283870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.991 ms 00:24:23.284 [2024-12-15 10:02:12.283879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.284 [2024-12-15 10:02:12.283926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.283936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.284 [2024-12-15 10:02:12.283945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:23.284 [2024-12-15 10:02:12.283953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.284 [2024-12-15 10:02:12.284515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.284547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.284 [2024-12-15 10:02:12.284558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:24:23.284 [2024-12-15 10:02:12.284572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.284 [2024-12-15 10:02:12.284712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.284 [2024-12-15 10:02:12.284731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.284 [2024-12-15 10:02:12.284739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:24:23.284 [2024-12-15 10:02:12.284747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.299854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.299885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.546 [2024-12-15 10:02:12.299895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:24:23.546 [2024-12-15 
10:02:12.299903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.312819] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:23.546 [2024-12-15 10:02:12.312854] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:23.546 [2024-12-15 10:02:12.312864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.312872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:23.546 [2024-12-15 10:02:12.312881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.869 ms 00:24:23.546 [2024-12-15 10:02:12.312887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.337468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.337500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:23.546 [2024-12-15 10:02:12.337510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.542 ms 00:24:23.546 [2024-12-15 10:02:12.337518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.349328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.349365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:23.546 [2024-12-15 10:02:12.349374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.788 ms 00:24:23.546 [2024-12-15 10:02:12.349381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.361141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.361272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:23.546 [2024-12-15 10:02:12.361295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.729 ms 00:24:23.546 [2024-12-15 10:02:12.361301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.361643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.361654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:23.546 [2024-12-15 10:02:12.361662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:24:23.546 [2024-12-15 10:02:12.361669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.546 [2024-12-15 10:02:12.419313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.546 [2024-12-15 10:02:12.419350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:23.546 [2024-12-15 10:02:12.419362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.627 ms 00:24:23.546 [2024-12-15 10:02:12.419370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.429935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:23.547 [2024-12-15 10:02:12.432126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.432154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:23.547 [2024-12-15 10:02:12.432165] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.719 ms 00:24:23.547 [2024-12-15 10:02:12.432177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.432231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.432241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:23.547 [2024-12-15 10:02:12.432250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:23.547 [2024-12-15 10:02:12.432275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.433283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.433310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:23.547 [2024-12-15 10:02:12.433319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:24:23.547 [2024-12-15 10:02:12.433327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.434451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.434475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:23.547 [2024-12-15 10:02:12.434484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:24:23.547 [2024-12-15 10:02:12.434490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.434515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.434523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:23.547 [2024-12-15 10:02:12.434535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:23.547 [2024-12-15 10:02:12.434542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.434571] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:23.547 [2024-12-15 10:02:12.434579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.434588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:23.547 [2024-12-15 10:02:12.434596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:23.547 [2024-12-15 10:02:12.434603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.458265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.458303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:23.547 [2024-12-15 10:02:12.458315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.644 ms 00:24:23.547 [2024-12-15 10:02:12.458323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.458397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.547 [2024-12-15 10:02:12.458407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:23.547 [2024-12-15 10:02:12.458416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:23.547 [2024-12-15 10:02:12.458424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.547 [2024-12-15 10:02:12.466163] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 272.687 ms, result 0 00:24:24.933  [2024-12-15T10:02:14.893Z] Copying: 1156/1048576 [kB] (1156 kBps) [2024-12-15T10:02:15.837Z] Copying: 4684/1048576 [kB] (3528 kBps) [2024-12-15T10:02:16.782Z] Copying: 20/1024 [MB] (16 MBps) [2024-12-15T10:02:17.725Z] Copying: 53/1024 [MB] (32 MBps) [2024-12-15T10:02:18.669Z] Copying: 85/1024 [MB] (31 MBps) [2024-12-15T10:02:20.052Z] Copying: 113/1024 [MB] (27 MBps) [2024-12-15T10:02:20.994Z] Copying: 140/1024 [MB] (26 MBps) [2024-12-15T10:02:21.936Z] Copying: 166/1024 [MB] (26 MBps) [2024-12-15T10:02:22.881Z] Copying: 194/1024 [MB] (27 MBps) [2024-12-15T10:02:23.826Z] Copying: 222/1024 [MB] (27 MBps) [2024-12-15T10:02:24.769Z] Copying: 251/1024 [MB] (29 MBps) [2024-12-15T10:02:25.707Z] Copying: 278/1024 [MB] (26 MBps) [2024-12-15T10:02:26.648Z] Copying: 307/1024 [MB] (29 MBps) [2024-12-15T10:02:28.035Z] Copying: 335/1024 [MB] (27 MBps) [2024-12-15T10:02:28.975Z] Copying: 362/1024 [MB] (27 MBps) [2024-12-15T10:02:29.918Z] Copying: 389/1024 [MB] (27 MBps) [2024-12-15T10:02:30.862Z] Copying: 423/1024 [MB] (34 MBps) [2024-12-15T10:02:31.805Z] Copying: 448/1024 [MB] (24 MBps) [2024-12-15T10:02:32.798Z] Copying: 473/1024 [MB] (25 MBps) [2024-12-15T10:02:33.738Z] Copying: 499/1024 [MB] (26 MBps) [2024-12-15T10:02:34.681Z] Copying: 521/1024 [MB] (22 MBps) [2024-12-15T10:02:36.067Z] Copying: 547/1024 [MB] (25 MBps) [2024-12-15T10:02:36.638Z] Copying: 574/1024 [MB] (27 MBps) [2024-12-15T10:02:38.023Z] Copying: 603/1024 [MB] (29 MBps) [2024-12-15T10:02:38.962Z] Copying: 629/1024 [MB] (25 MBps) [2024-12-15T10:02:39.902Z] Copying: 651/1024 [MB] (22 MBps) [2024-12-15T10:02:40.843Z] Copying: 678/1024 [MB] (26 MBps) [2024-12-15T10:02:41.782Z] Copying: 702/1024 [MB] (24 MBps) [2024-12-15T10:02:42.725Z] Copying: 731/1024 [MB] (29 MBps) [2024-12-15T10:02:43.671Z] Copying: 760/1024 [MB] (28 MBps) [2024-12-15T10:02:45.060Z] Copying: 788/1024 [MB] (28 MBps) [2024-12-15T10:02:46.003Z] Copying: 810/1024 [MB] (22 MBps) [2024-12-15T10:02:46.948Z] Copying: 838/1024 [MB] (28 MBps) [2024-12-15T10:02:47.894Z] Copying: 864/1024 [MB] (25 MBps) [2024-12-15T10:02:48.839Z] Copying: 886/1024 [MB] (22 MBps) [2024-12-15T10:02:49.782Z] Copying: 915/1024 [MB] (28 MBps) [2024-12-15T10:02:50.727Z] Copying: 931/1024 [MB] (15 MBps) [2024-12-15T10:02:51.672Z] Copying: 948/1024 [MB] (17 MBps) [2024-12-15T10:02:53.053Z] Copying: 973/1024 [MB] (24 MBps) [2024-12-15T10:02:53.625Z] Copying: 1002/1024 [MB] (29 MBps) [2024-12-15T10:02:53.625Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-15 10:02:53.475292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.475361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:04.609 [2024-12-15 10:02:53.475378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:04.609 [2024-12-15 10:02:53.475387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.475409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:04.609 [2024-12-15 10:02:53.478745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.478900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:04.609 [2024-12-15 10:02:53.478971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.319 ms 00:25:04.609 [2024-12-15 10:02:53.478997] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.479282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.479478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:04.609 [2024-12-15 10:02:53.479505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:25:04.609 [2024-12-15 10:02:53.479525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.491621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.491784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:04.609 [2024-12-15 10:02:53.491890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.065 ms 00:25:04.609 [2024-12-15 10:02:53.491904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.498030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.498185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:04.609 [2024-12-15 10:02:53.498203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.091 ms 00:25:04.609 [2024-12-15 10:02:53.498211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.525210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.525393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:04.609 [2024-12-15 10:02:53.525415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.916 ms 00:25:04.609 [2024-12-15 10:02:53.525422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.542100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.542145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:04.609 [2024-12-15 10:02:53.542156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.586 ms 00:25:04.609 [2024-12-15 10:02:53.542164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.550570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.550615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:04.609 [2024-12-15 10:02:53.550633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.357 ms 00:25:04.609 [2024-12-15 10:02:53.550641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.576571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.576752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:04.609 [2024-12-15 10:02:53.576773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.915 ms 00:25:04.609 [2024-12-15 10:02:53.576781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.609 [2024-12-15 10:02:53.602262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.609 [2024-12-15 10:02:53.602305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:04.609 [2024-12-15 10:02:53.602317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 25.447 ms 00:25:04.609 [2024-12-15 10:02:53.602336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.871 [2024-12-15 10:02:53.627214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.871 [2024-12-15 10:02:53.627272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:04.871 [2024-12-15 10:02:53.627284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.837 ms 00:25:04.871 [2024-12-15 10:02:53.627292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.871 [2024-12-15 10:02:53.652611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.871 [2024-12-15 10:02:53.652662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:04.871 [2024-12-15 10:02:53.652674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.236 ms 00:25:04.871 [2024-12-15 10:02:53.652681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.871 [2024-12-15 10:02:53.652723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:04.871 [2024-12-15 10:02:53.652738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:04.871 [2024-12-15 10:02:53.652748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:25:04.871 [2024-12-15 10:02:53.652757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:04.871 [2024-12-15 10:02:53.652917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.652998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653239] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653450] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:04.872 [2024-12-15 10:02:53.653527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:04.872 [2024-12-15 10:02:53.653535] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 439e8229-b4b3-4098-a525-37f17a62794a 00:25:04.872 [2024-12-15 10:02:53.653543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:25:04.872 [2024-12-15 10:02:53.653555] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156864 00:25:04.872 [2024-12-15 10:02:53.653562] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154880 00:25:04.872 [2024-12-15 10:02:53.653571] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:25:04.872 [2024-12-15 10:02:53.653578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:04.872 [2024-12-15 10:02:53.653586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:04.872 [2024-12-15 10:02:53.653594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:04.872 [2024-12-15 10:02:53.653600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:04.872 [2024-12-15 10:02:53.653613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:04.872 [2024-12-15 10:02:53.653620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.872 [2024-12-15 10:02:53.653628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:04.872 [2024-12-15 10:02:53.653636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:25:04.872 [2024-12-15 10:02:53.653643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.872 [2024-12-15 10:02:53.667182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.872 [2024-12-15 10:02:53.667381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:04.872 [2024-12-15 10:02:53.667400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.505 ms 00:25:04.873 [2024-12-15 10:02:53.667408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.667639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.873 [2024-12-15 10:02:53.667648] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:04.873 [2024-12-15 10:02:53.667657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:25:04.873 [2024-12-15 10:02:53.667670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.706824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.707003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.873 [2024-12-15 10:02:53.707024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.707033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.707095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.707103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.873 [2024-12-15 10:02:53.707112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.707125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.707202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.707213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.873 [2024-12-15 10:02:53.707221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.707229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.707245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.707277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.873 [2024-12-15 10:02:53.707285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.707292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.788192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.788246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.873 [2024-12-15 10:02:53.788274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.788283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.820761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.820806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.873 [2024-12-15 10:02:53.820817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.820825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.820897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.873 [2024-12-15 10:02:53.820907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.873 [2024-12-15 10:02:53.820916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.873 [2024-12-15 10:02:53.820924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.873 [2024-12-15 10:02:53.820965] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.873 [2024-12-15 10:02:53.820974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:04.873 [2024-12-15 10:02:53.820983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:04.873 [2024-12-15 10:02:53.820992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:04.873 [2024-12-15 10:02:53.821089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.873 [2024-12-15 10:02:53.821103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:04.873 [2024-12-15 10:02:53.821112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:04.873 [2024-12-15 10:02:53.821120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:04.873 [2024-12-15 10:02:53.821150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.873 [2024-12-15 10:02:53.821159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:04.873 [2024-12-15 10:02:53.821168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:04.873 [2024-12-15 10:02:53.821176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:04.873 [2024-12-15 10:02:53.821215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.873 [2024-12-15 10:02:53.821227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:04.873 [2024-12-15 10:02:53.821236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:04.873 [2024-12-15 10:02:53.821244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:04.873 [2024-12-15 10:02:53.821324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.873 [2024-12-15 10:02:53.821336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:04.873 [2024-12-15 10:02:53.821345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:04.873 [2024-12-15 10:02:53.821353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:04.873 [2024-12-15 10:02:53.821501] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.192 ms, result 0
00:25:05.817
00:25:05.817
00:25:05.817 10:02:54 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:08.363 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:08.363 10:02:56 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-15 10:02:56.874104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
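The two harness commands just above are the core of the dirty-shutdown check: dirty_shutdown.sh@94 runs md5sum -c against a checksum recorded before the FTL device was shut down uncleanly ("testfile: OK" confirms the first half of the data read back intact), and @95 launches spdk_dd to read the next 262144 blocks (--skip=262144) from the restarted ftl0 bdev into testfile2 for the same comparison. A minimal sketch of that verify-after-restart pattern follows, using only the spdk_dd flags visible in this log; the paths, and the assumption that the .md5 file was recorded from the same data before the shutdown, are illustrative:

    # Sketch only: paths are assumptions, and /tmp/testfile2.md5 is presumed to
    # have been recorded from the same data before the unclean shutdown (as the
    # harness does for testfile.md5).
    spdk_dd --ib=ftl0 --of=/tmp/testfile2 \
            --count=262144 --skip=262144 \
            --json=/tmp/ftl.json        # re-read the second 262144 blocks from the FTL bdev
    md5sum -c /tmp/testfile2.md5        # prints "/tmp/testfile2: OK" on a byte-for-byte match

The statistics dumped by ftl_debug.c a little further up can be sanity-checked the same way: WAF = total writes / user writes = 156864 / 154880 ≈ 1.0128, matching the logged value, i.e. less than 1.3% of the writes in this run were FTL housekeeping rather than user data.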
00:25:08.363 [2024-12-15 10:02:56.874198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77838 ] 00:25:08.363 [2024-12-15 10:02:57.018677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.363 [2024-12-15 10:02:57.225570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.625 [2024-12-15 10:02:57.513342] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.625 [2024-12-15 10:02:57.513422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.887 [2024-12-15 10:02:57.669971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.670032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.887 [2024-12-15 10:02:57.670046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:08.887 [2024-12-15 10:02:57.670058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.670114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.670126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.887 [2024-12-15 10:02:57.670135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:08.887 [2024-12-15 10:02:57.670143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.670164] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.887 [2024-12-15 10:02:57.670947] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.887 [2024-12-15 10:02:57.670975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.670985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.887 [2024-12-15 10:02:57.670995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:25:08.887 [2024-12-15 10:02:57.671003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.672789] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:08.887 [2024-12-15 10:02:57.687415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.687462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:08.887 [2024-12-15 10:02:57.687475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:25:08.887 [2024-12-15 10:02:57.687484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.687559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.687568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:08.887 [2024-12-15 10:02:57.687578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:08.887 [2024-12-15 10:02:57.687585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.695634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 
10:02:57.695678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.887 [2024-12-15 10:02:57.695688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.972 ms 00:25:08.887 [2024-12-15 10:02:57.695696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.695797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.695807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.887 [2024-12-15 10:02:57.695816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:08.887 [2024-12-15 10:02:57.695824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.695868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.695878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.887 [2024-12-15 10:02:57.695886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:08.887 [2024-12-15 10:02:57.695894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.695925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.887 [2024-12-15 10:02:57.700125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.700162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.887 [2024-12-15 10:02:57.700172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:25:08.887 [2024-12-15 10:02:57.700180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.700220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.887 [2024-12-15 10:02:57.700228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.887 [2024-12-15 10:02:57.700236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:08.887 [2024-12-15 10:02:57.700247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.887 [2024-12-15 10:02:57.700315] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:08.887 [2024-12-15 10:02:57.700338] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:08.887 [2024-12-15 10:02:57.700374] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:08.887 [2024-12-15 10:02:57.700390] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:08.887 [2024-12-15 10:02:57.700465] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:08.887 [2024-12-15 10:02:57.700477] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.887 [2024-12-15 10:02:57.700491] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:08.887 [2024-12-15 10:02:57.700503] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700512] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700521] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:08.888 [2024-12-15 10:02:57.700528] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.888 [2024-12-15 10:02:57.700537] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:08.888 [2024-12-15 10:02:57.700546] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:08.888 [2024-12-15 10:02:57.700554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.700561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.888 [2024-12-15 10:02:57.700569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:25:08.888 [2024-12-15 10:02:57.700576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-12-15 10:02:57.700655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.700664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.888 [2024-12-15 10:02:57.700672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:08.888 [2024-12-15 10:02:57.700679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-12-15 10:02:57.700750] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.888 [2024-12-15 10:02:57.700760] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.888 [2024-12-15 10:02:57.700768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.888 [2024-12-15 10:02:57.700790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.888 [2024-12-15 10:02:57.700812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.888 [2024-12-15 10:02:57.700825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.888 [2024-12-15 10:02:57.700835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:08.888 [2024-12-15 10:02:57.700843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.888 [2024-12-15 10:02:57.700850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.888 [2024-12-15 10:02:57.700857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:08.888 [2024-12-15 10:02:57.700864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.888 [2024-12-15 10:02:57.700885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:08.888 [2024-12-15 10:02:57.700892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:08.888 [2024-12-15 10:02:57.700907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:08.888 [2024-12-15 10:02:57.700914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.888 [2024-12-15 10:02:57.700928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.888 [2024-12-15 10:02:57.700948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.888 [2024-12-15 10:02:57.700968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:08.888 [2024-12-15 10:02:57.700982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.888 [2024-12-15 10:02:57.700989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:08.888 [2024-12-15 10:02:57.700995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:08.888 [2024-12-15 10:02:57.701002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.888 [2024-12-15 10:02:57.701008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:08.888 [2024-12-15 10:02:57.701015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.888 [2024-12-15 10:02:57.701022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.888 [2024-12-15 10:02:57.701029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:08.888 [2024-12-15 10:02:57.701035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.888 [2024-12-15 10:02:57.701041] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:08.888 [2024-12-15 10:02:57.701051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.888 [2024-12-15 10:02:57.701059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.888 [2024-12-15 10:02:57.701067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.888 [2024-12-15 10:02:57.701075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:08.888 [2024-12-15 10:02:57.701083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.888 [2024-12-15 10:02:57.701089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:08.888 [2024-12-15 10:02:57.701096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.888 [2024-12-15 10:02:57.701103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.888 [2024-12-15 10:02:57.701110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.888 [2024-12-15 10:02:57.701117] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.888 [2024-12-15 10:02:57.701127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.888 [2024-12-15 10:02:57.701137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:08.888 [2024-12-15 10:02:57.701144] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:08.888 [2024-12-15 10:02:57.701152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:08.888 [2024-12-15 10:02:57.701159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:08.888 [2024-12-15 10:02:57.701166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:08.888 [2024-12-15 10:02:57.701173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:08.888 [2024-12-15 10:02:57.701181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:08.888 [2024-12-15 10:02:57.701189] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:08.888 [2024-12-15 10:02:57.701196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:08.888 [2024-12-15 10:02:57.701203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:08.888 [2024-12-15 10:02:57.701210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:08.888 [2024-12-15 10:02:57.701217] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:08.888 [2024-12-15 10:02:57.701225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:08.888 [2024-12-15 10:02:57.701232] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.888 [2024-12-15 10:02:57.701241] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.888 [2024-12-15 10:02:57.701249] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.888 [2024-12-15 10:02:57.701272] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.888 [2024-12-15 10:02:57.701280] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.888 [2024-12-15 10:02:57.701288] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:08.888 [2024-12-15 10:02:57.701295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.701304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.888 [2024-12-15 10:02:57.701311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:25:08.888 [2024-12-15 10:02:57.701319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-12-15 10:02:57.719541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.719592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.888 [2024-12-15 10:02:57.719604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:25:08.888 [2024-12-15 10:02:57.719619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-12-15 10:02:57.719710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.719720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.888 [2024-12-15 10:02:57.719730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:08.888 [2024-12-15 10:02:57.719739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-12-15 10:02:57.762873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.762927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.888 [2024-12-15 10:02:57.762940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.080 ms 00:25:08.888 [2024-12-15 10:02:57.762948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-12-15 10:02:57.762997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-12-15 10:02:57.763007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.888 [2024-12-15 10:02:57.763017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:08.889 [2024-12-15 10:02:57.763025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.763644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.763676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.889 [2024-12-15 10:02:57.763688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:25:08.889 [2024-12-15 10:02:57.763702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.763833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.763851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.889 [2024-12-15 10:02:57.763860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:08.889 [2024-12-15 10:02:57.763868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.781199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.781241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.889 [2024-12-15 10:02:57.781280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.307 ms 00:25:08.889 [2024-12-15 
10:02:57.781290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.795463] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:08.889 [2024-12-15 10:02:57.795508] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:08.889 [2024-12-15 10:02:57.795521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.795529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:08.889 [2024-12-15 10:02:57.795539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.121 ms 00:25:08.889 [2024-12-15 10:02:57.795547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.821765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.821813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:08.889 [2024-12-15 10:02:57.821826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.166 ms 00:25:08.889 [2024-12-15 10:02:57.821834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.834689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.834732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:08.889 [2024-12-15 10:02:57.834744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.801 ms 00:25:08.889 [2024-12-15 10:02:57.834752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.847383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.847434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:08.889 [2024-12-15 10:02:57.847445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.587 ms 00:25:08.889 [2024-12-15 10:02:57.847452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-12-15 10:02:57.847838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-12-15 10:02:57.847852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:08.889 [2024-12-15 10:02:57.847862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:25:08.889 [2024-12-15 10:02:57.847870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.150 [2024-12-15 10:02:57.914187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.150 [2024-12-15 10:02:57.914242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.150 [2024-12-15 10:02:57.914279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.298 ms 00:25:09.150 [2024-12-15 10:02:57.914288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.150 [2024-12-15 10:02:57.925722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.150 [2024-12-15 10:02:57.928035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.150 [2024-12-15 10:02:57.928063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.151 [2024-12-15 10:02:57.928074] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.688 ms 00:25:09.151 [2024-12-15 10:02:57.928087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.928150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.928160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.151 [2024-12-15 10:02:57.928168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.151 [2024-12-15 10:02:57.928176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.928779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.928809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.151 [2024-12-15 10:02:57.928818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:25:09.151 [2024-12-15 10:02:57.928826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.930003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.930029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:09.151 [2024-12-15 10:02:57.930038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:25:09.151 [2024-12-15 10:02:57.930045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.930071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.930079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.151 [2024-12-15 10:02:57.930092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.151 [2024-12-15 10:02:57.930099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.930130] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.151 [2024-12-15 10:02:57.930139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.930149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.151 [2024-12-15 10:02:57.930156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.151 [2024-12-15 10:02:57.930164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.954309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.954339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.151 [2024-12-15 10:02:57.954350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.128 ms 00:25:09.151 [2024-12-15 10:02:57.954357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.954424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.151 [2024-12-15 10:02:57.954434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.151 [2024-12-15 10:02:57.954442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:09.151 [2024-12-15 10:02:57.954449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.151 [2024-12-15 10:02:57.955783] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 285.360 ms, result 0 00:25:10.536  [2024-12-15T10:03:00.496Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-15T10:03:01.455Z] Copying: 35/1024 [MB] (17 MBps) [2024-12-15T10:03:02.456Z] Copying: 50/1024 [MB] (14 MBps) [2024-12-15T10:03:03.401Z] Copying: 62/1024 [MB] (11 MBps) [2024-12-15T10:03:04.346Z] Copying: 73/1024 [MB] (11 MBps) [2024-12-15T10:03:05.293Z] Copying: 84/1024 [MB] (10 MBps) [2024-12-15T10:03:06.234Z] Copying: 94/1024 [MB] (10 MBps) [2024-12-15T10:03:07.177Z] Copying: 114/1024 [MB] (20 MBps) [2024-12-15T10:03:08.565Z] Copying: 128/1024 [MB] (14 MBps) [2024-12-15T10:03:09.140Z] Copying: 139/1024 [MB] (10 MBps) [2024-12-15T10:03:10.531Z] Copying: 150/1024 [MB] (10 MBps) [2024-12-15T10:03:11.475Z] Copying: 160/1024 [MB] (10 MBps) [2024-12-15T10:03:12.420Z] Copying: 170/1024 [MB] (10 MBps) [2024-12-15T10:03:13.367Z] Copying: 191/1024 [MB] (20 MBps) [2024-12-15T10:03:14.312Z] Copying: 213/1024 [MB] (22 MBps) [2024-12-15T10:03:15.256Z] Copying: 233/1024 [MB] (19 MBps) [2024-12-15T10:03:16.196Z] Copying: 252/1024 [MB] (19 MBps) [2024-12-15T10:03:17.141Z] Copying: 274/1024 [MB] (22 MBps) [2024-12-15T10:03:18.527Z] Copying: 292/1024 [MB] (18 MBps) [2024-12-15T10:03:19.471Z] Copying: 318/1024 [MB] (25 MBps) [2024-12-15T10:03:20.413Z] Copying: 337/1024 [MB] (19 MBps) [2024-12-15T10:03:21.356Z] Copying: 356/1024 [MB] (18 MBps) [2024-12-15T10:03:22.302Z] Copying: 380/1024 [MB] (24 MBps) [2024-12-15T10:03:23.248Z] Copying: 391/1024 [MB] (10 MBps) [2024-12-15T10:03:24.192Z] Copying: 411/1024 [MB] (20 MBps) [2024-12-15T10:03:25.136Z] Copying: 423/1024 [MB] (11 MBps) [2024-12-15T10:03:26.524Z] Copying: 444/1024 [MB] (20 MBps) [2024-12-15T10:03:27.466Z] Copying: 464/1024 [MB] (20 MBps) [2024-12-15T10:03:28.407Z] Copying: 494/1024 [MB] (29 MBps) [2024-12-15T10:03:29.346Z] Copying: 525/1024 [MB] (31 MBps) [2024-12-15T10:03:30.291Z] Copying: 548/1024 [MB] (22 MBps) [2024-12-15T10:03:31.297Z] Copying: 566/1024 [MB] (18 MBps) [2024-12-15T10:03:32.241Z] Copying: 583/1024 [MB] (17 MBps) [2024-12-15T10:03:33.185Z] Copying: 599/1024 [MB] (15 MBps) [2024-12-15T10:03:34.130Z] Copying: 609/1024 [MB] (10 MBps) [2024-12-15T10:03:35.529Z] Copying: 622/1024 [MB] (12 MBps) [2024-12-15T10:03:36.475Z] Copying: 633/1024 [MB] (10 MBps) [2024-12-15T10:03:37.421Z] Copying: 644/1024 [MB] (11 MBps) [2024-12-15T10:03:38.367Z] Copying: 654/1024 [MB] (10 MBps) [2024-12-15T10:03:39.313Z] Copying: 664/1024 [MB] (10 MBps) [2024-12-15T10:03:40.256Z] Copying: 676/1024 [MB] (11 MBps) [2024-12-15T10:03:41.200Z] Copying: 689/1024 [MB] (13 MBps) [2024-12-15T10:03:42.145Z] Copying: 701/1024 [MB] (11 MBps) [2024-12-15T10:03:43.531Z] Copying: 712/1024 [MB] (11 MBps) [2024-12-15T10:03:44.475Z] Copying: 724/1024 [MB] (11 MBps) [2024-12-15T10:03:45.418Z] Copying: 740/1024 [MB] (16 MBps) [2024-12-15T10:03:46.363Z] Copying: 760/1024 [MB] (20 MBps) [2024-12-15T10:03:47.307Z] Copying: 771/1024 [MB] (10 MBps) [2024-12-15T10:03:48.253Z] Copying: 782/1024 [MB] (10 MBps) [2024-12-15T10:03:49.197Z] Copying: 799/1024 [MB] (17 MBps) [2024-12-15T10:03:50.142Z] Copying: 810/1024 [MB] (11 MBps) [2024-12-15T10:03:51.530Z] Copying: 824/1024 [MB] (13 MBps) [2024-12-15T10:03:52.474Z] Copying: 835/1024 [MB] (11 MBps) [2024-12-15T10:03:53.419Z] Copying: 847/1024 [MB] (12 MBps) [2024-12-15T10:03:54.362Z] Copying: 867/1024 [MB] (19 MBps) [2024-12-15T10:03:55.349Z] Copying: 878/1024 [MB] (10 MBps) [2024-12-15T10:03:56.315Z] Copying: 895/1024 [MB] (17 MBps) [2024-12-15T10:03:57.260Z] Copying: 912/1024 
[MB] (16 MBps) [2024-12-15T10:03:58.205Z] Copying: 927/1024 [MB] (15 MBps) [2024-12-15T10:03:59.149Z] Copying: 938/1024 [MB] (11 MBps) [2024-12-15T10:04:00.534Z] Copying: 949/1024 [MB] (10 MBps) [2024-12-15T10:04:01.478Z] Copying: 960/1024 [MB] (11 MBps) [2024-12-15T10:04:02.419Z] Copying: 978/1024 [MB] (17 MBps) [2024-12-15T10:04:03.364Z] Copying: 998/1024 [MB] (19 MBps) [2024-12-15T10:04:04.310Z] Copying: 1012/1024 [MB] (13 MBps) [2024-12-15T10:04:04.310Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-15 10:04:04.021553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.021631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:15.294 [2024-12-15 10:04:04.021647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:15.294 [2024-12-15 10:04:04.021656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.021680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:15.294 [2024-12-15 10:04:04.024651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.024890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:15.294 [2024-12-15 10:04:04.024914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:26:15.294 [2024-12-15 10:04:04.024923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.025290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.025310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:15.294 [2024-12-15 10:04:04.025324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:26:15.294 [2024-12-15 10:04:04.025336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.032859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.032899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:15.294 [2024-12-15 10:04:04.032921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.501 ms 00:26:15.294 [2024-12-15 10:04:04.032933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.040950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.041121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:15.294 [2024-12-15 10:04:04.041140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.968 ms 00:26:15.294 [2024-12-15 10:04:04.041149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.068526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.068574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:15.294 [2024-12-15 10:04:04.068587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.300 ms 00:26:15.294 [2024-12-15 10:04:04.068594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.084581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.084626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
valid map metadata 00:26:15.294 [2024-12-15 10:04:04.084639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.941 ms 00:26:15.294 [2024-12-15 10:04:04.084656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.093077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.093120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:15.294 [2024-12-15 10:04:04.093131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.352 ms 00:26:15.294 [2024-12-15 10:04:04.093139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.118924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.118966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:15.294 [2024-12-15 10:04:04.118977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.769 ms 00:26:15.294 [2024-12-15 10:04:04.118985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.144423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.144466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:15.294 [2024-12-15 10:04:04.144492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.395 ms 00:26:15.294 [2024-12-15 10:04:04.144499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.169722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.169767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:15.294 [2024-12-15 10:04:04.169778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.179 ms 00:26:15.294 [2024-12-15 10:04:04.169784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.194647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.294 [2024-12-15 10:04:04.194691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:15.294 [2024-12-15 10:04:04.194703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.779 ms 00:26:15.294 [2024-12-15 10:04:04.194710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.294 [2024-12-15 10:04:04.194751] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:15.294 [2024-12-15 10:04:04.194773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:15.294 [2024-12-15 10:04:04.194783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:26:15.294 [2024-12-15 10:04:04.194791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194822] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:15.294 [2024-12-15 10:04:04.194845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.194999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 
10:04:04.195014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:26:15.295 [2024-12-15 10:04:04.195204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:15.295 [2024-12-15 10:04:04.195501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:15.296 [2024-12-15 10:04:04.195574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:15.296 [2024-12-15 10:04:04.195583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 439e8229-b4b3-4098-a525-37f17a62794a 00:26:15.296 [2024-12-15 10:04:04.195591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:26:15.296 [2024-12-15 10:04:04.195598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:15.296 [2024-12-15 10:04:04.195606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:15.296 [2024-12-15 10:04:04.195613] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:15.296 [2024-12-15 10:04:04.195621] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:26:15.296 [2024-12-15 10:04:04.195629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:15.296 [2024-12-15 10:04:04.195637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:15.296 [2024-12-15 10:04:04.195651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:15.296 [2024-12-15 10:04:04.195658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:15.296 [2024-12-15 10:04:04.195665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.296 [2024-12-15 10:04:04.195673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:15.296 [2024-12-15 10:04:04.195684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:26:15.296 [2024-12-15 10:04:04.195692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.296 [2024-12-15 10:04:04.208886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.296 [2024-12-15 10:04:04.208929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:15.296 [2024-12-15 10:04:04.208940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.162 ms 00:26:15.296 [2024-12-15 10:04:04.208948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.296 [2024-12-15 10:04:04.209181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.296 [2024-12-15 10:04:04.209191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:15.296 [2024-12-15 10:04:04.209200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:26:15.296 [2024-12-15 10:04:04.209207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.296 [2024-12-15 10:04:04.248102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.296 [2024-12-15 10:04:04.248151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.296 [2024-12-15 10:04:04.248162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.296 [2024-12-15 10:04:04.248171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.296 [2024-12-15 10:04:04.248241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.296 [2024-12-15 10:04:04.248250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.296 [2024-12-15 10:04:04.248278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.296 [2024-12-15 10:04:04.248286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.296 [2024-12-15 10:04:04.248364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.296 [2024-12-15 10:04:04.248374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.296 [2024-12-15 10:04:04.248383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.296 [2024-12-15 10:04:04.248412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.296 [2024-12-15 10:04:04.248429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.296 [2024-12-15 10:04:04.248442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.296 [2024-12-15 10:04:04.248450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.296 [2024-12-15 
10:04:04.248458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.328200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.328446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.558 [2024-12-15 10:04:04.328470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.328479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.558 [2024-12-15 10:04:04.361177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:15.558 [2024-12-15 10:04:04.361288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:15.558 [2024-12-15 10:04:04.361364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:15.558 [2024-12-15 10:04:04.361494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:15.558 [2024-12-15 10:04:04.361551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:15.558 [2024-12-15 10:04:04.361619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.558 [2024-12-15 10:04:04.361683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:15.558 [2024-12-15 10:04:04.361695] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.558 [2024-12-15 10:04:04.361703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-12-15 10:04:04.361834] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.247 ms, result 0 00:26:16.502 00:26:16.502 00:26:16.502 10:04:05 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:19.047 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:19.047 Process with pid 75992 is not found 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75992 00:26:19.047 10:04:07 -- common/autotest_common.sh@936 -- # '[' -z 75992 ']' 00:26:19.047 10:04:07 -- common/autotest_common.sh@940 -- # kill -0 75992 00:26:19.047 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75992) - No such process 00:26:19.047 10:04:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75992 is not found' 00:26:19.047 10:04:07 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:19.047 10:04:08 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:19.047 Remove shared memory files 00:26:19.047 10:04:08 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:19.047 10:04:08 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:19.047 10:04:08 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:19.047 10:04:08 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:19.047 10:04:08 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:19.047 10:04:08 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:19.047 ************************************ 00:26:19.047 END TEST ftl_dirty_shutdown 00:26:19.047 ************************************ 00:26:19.047 00:26:19.047 real 4m2.369s 00:26:19.047 user 4m14.976s 00:26:19.047 sys 0m24.531s 00:26:19.047 10:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:19.047 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:26:19.309 10:04:08 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:19.309 10:04:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:19.309 10:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.309 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:26:19.309 ************************************ 00:26:19.309 START TEST ftl_upgrade_shutdown 00:26:19.309 ************************************ 00:26:19.309 10:04:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:19.309 * Looking for test storage... 
00:26:19.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.309 10:04:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:19.309 10:04:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:19.309 10:04:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:19.309 10:04:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:19.309 10:04:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:19.309 10:04:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:19.309 10:04:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:19.309 10:04:08 -- scripts/common.sh@335 -- # IFS=.-: 00:26:19.309 10:04:08 -- scripts/common.sh@335 -- # read -ra ver1 00:26:19.309 10:04:08 -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.309 10:04:08 -- scripts/common.sh@336 -- # read -ra ver2 00:26:19.309 10:04:08 -- scripts/common.sh@337 -- # local 'op=<' 00:26:19.309 10:04:08 -- scripts/common.sh@339 -- # ver1_l=2 00:26:19.309 10:04:08 -- scripts/common.sh@340 -- # ver2_l=1 00:26:19.309 10:04:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:19.309 10:04:08 -- scripts/common.sh@343 -- # case "$op" in 00:26:19.309 10:04:08 -- scripts/common.sh@344 -- # : 1 00:26:19.309 10:04:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:19.309 10:04:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:19.309 10:04:08 -- scripts/common.sh@364 -- # decimal 1 00:26:19.309 10:04:08 -- scripts/common.sh@352 -- # local d=1 00:26:19.309 10:04:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.309 10:04:08 -- scripts/common.sh@354 -- # echo 1 00:26:19.309 10:04:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:19.309 10:04:08 -- scripts/common.sh@365 -- # decimal 2 00:26:19.309 10:04:08 -- scripts/common.sh@352 -- # local d=2 00:26:19.309 10:04:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.309 10:04:08 -- scripts/common.sh@354 -- # echo 2 00:26:19.309 10:04:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:19.309 10:04:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:19.309 10:04:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:19.309 10:04:08 -- scripts/common.sh@367 -- # return 0 00:26:19.309 10:04:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.309 10:04:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:19.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.309 --rc genhtml_branch_coverage=1 00:26:19.309 --rc genhtml_function_coverage=1 00:26:19.309 --rc genhtml_legend=1 00:26:19.309 --rc geninfo_all_blocks=1 00:26:19.309 --rc geninfo_unexecuted_blocks=1 00:26:19.309 00:26:19.309 ' 00:26:19.309 10:04:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:19.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.309 --rc genhtml_branch_coverage=1 00:26:19.309 --rc genhtml_function_coverage=1 00:26:19.309 --rc genhtml_legend=1 00:26:19.309 --rc geninfo_all_blocks=1 00:26:19.309 --rc geninfo_unexecuted_blocks=1 00:26:19.309 00:26:19.309 ' 00:26:19.309 10:04:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:19.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.309 --rc genhtml_branch_coverage=1 00:26:19.309 --rc genhtml_function_coverage=1 00:26:19.309 --rc genhtml_legend=1 00:26:19.309 --rc geninfo_all_blocks=1 00:26:19.309 --rc geninfo_unexecuted_blocks=1 00:26:19.309 00:26:19.309 ' 00:26:19.309 10:04:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:19.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.310 --rc genhtml_branch_coverage=1 00:26:19.310 --rc genhtml_function_coverage=1 00:26:19.310 --rc genhtml_legend=1 00:26:19.310 --rc geninfo_all_blocks=1 00:26:19.310 --rc geninfo_unexecuted_blocks=1 00:26:19.310 00:26:19.310 ' 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:19.310 10:04:08 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:19.310 10:04:08 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.310 10:04:08 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.310 10:04:08 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:19.310 10:04:08 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:19.310 10:04:08 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.310 10:04:08 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:19.310 10:04:08 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:19.310 10:04:08 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.310 10:04:08 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.310 10:04:08 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:19.310 10:04:08 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:19.310 10:04:08 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.310 10:04:08 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.310 10:04:08 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:19.310 10:04:08 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:19.310 10:04:08 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.310 10:04:08 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.310 10:04:08 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:19.310 10:04:08 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:19.310 10:04:08 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:19.310 10:04:08 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:19.310 10:04:08 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.310 10:04:08 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.310 10:04:08 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:19.310 10:04:08 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:19.310 10:04:08 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.310 10:04:08 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:19.310 10:04:08 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:19.310 10:04:08 -- ftl/common.sh@81 -- # local base_bdev= 00:26:19.310 10:04:08 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:19.310 10:04:08 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:19.310 10:04:08 -- ftl/common.sh@89 -- # spdk_tgt_pid=78625 00:26:19.310 10:04:08 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:19.310 10:04:08 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:19.310 10:04:08 -- ftl/common.sh@91 -- # waitforlisten 78625 00:26:19.310 10:04:08 -- common/autotest_common.sh@829 -- # '[' -z 78625 ']' 00:26:19.310 10:04:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.310 10:04:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.310 10:04:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.310 10:04:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.310 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:26:19.571 [2024-12-15 10:04:08.341750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
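The `waitforlisten 78625` call traced above is what gates the rest of the test: it polls the freshly launched spdk_tgt's RPC socket (rpc_addr=/var/tmp/spdk.sock and max_retries=100 are both visible in the trace) until the target answers. A minimal sketch of that polling pattern, assuming a 0.1 s retry interval (the real autotest_common.sh helper differs in detail):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods only succeeds once the target listens on the socket
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        kill -0 "$pid" 2> /dev/null || return 1  # give up if the target died
        sleep 0.1
    done
    return 1
}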
00:26:19.571 [2024-12-15 10:04:08.342103] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78625 ] 00:26:19.571 [2024-12-15 10:04:08.493752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.832 [2024-12-15 10:04:08.714864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:19.832 [2024-12-15 10:04:08.715316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.218 10:04:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.218 10:04:09 -- common/autotest_common.sh@862 -- # return 0 00:26:21.219 10:04:09 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:21.219 10:04:09 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:21.219 10:04:09 -- ftl/common.sh@99 -- # local params 00:26:21.219 10:04:09 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.219 10:04:09 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:21.219 10:04:09 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.219 10:04:09 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:21.219 10:04:09 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.219 10:04:09 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:21.219 10:04:09 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.219 10:04:09 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:21.219 10:04:09 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.219 10:04:09 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:21.219 10:04:09 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:21.219 10:04:09 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:21.219 10:04:09 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:21.219 10:04:09 -- ftl/common.sh@54 -- # local name=base 00:26:21.219 10:04:09 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:21.219 10:04:09 -- ftl/common.sh@56 -- # local size=20480 00:26:21.219 10:04:09 -- ftl/common.sh@59 -- # local base_bdev 00:26:21.219 10:04:09 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:21.219 10:04:10 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:21.219 10:04:10 -- ftl/common.sh@62 -- # local base_size 00:26:21.219 10:04:10 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:21.219 10:04:10 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:26:21.219 10:04:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:21.219 10:04:10 -- common/autotest_common.sh@1369 -- # local bs 00:26:21.219 10:04:10 -- common/autotest_common.sh@1370 -- # local nb 00:26:21.219 10:04:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:21.480 10:04:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:21.480 { 00:26:21.480 "name": "basen1", 00:26:21.480 "aliases": [ 00:26:21.480 "c40d79a3-3139-4105-bee1-733af40204d2" 00:26:21.480 ], 00:26:21.480 "product_name": "NVMe disk", 00:26:21.480 "block_size": 4096, 00:26:21.480 "num_blocks": 1310720, 00:26:21.480 "uuid": "c40d79a3-3139-4105-bee1-733af40204d2", 00:26:21.480 "assigned_rate_limits": { 00:26:21.480 "rw_ios_per_sec": 0, 00:26:21.480 
"rw_mbytes_per_sec": 0, 00:26:21.480 "r_mbytes_per_sec": 0, 00:26:21.480 "w_mbytes_per_sec": 0 00:26:21.480 }, 00:26:21.480 "claimed": true, 00:26:21.480 "claim_type": "read_many_write_one", 00:26:21.480 "zoned": false, 00:26:21.480 "supported_io_types": { 00:26:21.480 "read": true, 00:26:21.480 "write": true, 00:26:21.480 "unmap": true, 00:26:21.480 "write_zeroes": true, 00:26:21.480 "flush": true, 00:26:21.480 "reset": true, 00:26:21.480 "compare": true, 00:26:21.480 "compare_and_write": false, 00:26:21.480 "abort": true, 00:26:21.480 "nvme_admin": true, 00:26:21.480 "nvme_io": true 00:26:21.480 }, 00:26:21.480 "driver_specific": { 00:26:21.480 "nvme": [ 00:26:21.480 { 00:26:21.481 "pci_address": "0000:00:07.0", 00:26:21.481 "trid": { 00:26:21.481 "trtype": "PCIe", 00:26:21.481 "traddr": "0000:00:07.0" 00:26:21.481 }, 00:26:21.481 "ctrlr_data": { 00:26:21.481 "cntlid": 0, 00:26:21.481 "vendor_id": "0x1b36", 00:26:21.481 "model_number": "QEMU NVMe Ctrl", 00:26:21.481 "serial_number": "12341", 00:26:21.481 "firmware_revision": "8.0.0", 00:26:21.481 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:21.481 "oacs": { 00:26:21.481 "security": 0, 00:26:21.481 "format": 1, 00:26:21.481 "firmware": 0, 00:26:21.481 "ns_manage": 1 00:26:21.481 }, 00:26:21.481 "multi_ctrlr": false, 00:26:21.481 "ana_reporting": false 00:26:21.481 }, 00:26:21.481 "vs": { 00:26:21.481 "nvme_version": "1.4" 00:26:21.481 }, 00:26:21.481 "ns_data": { 00:26:21.481 "id": 1, 00:26:21.481 "can_share": false 00:26:21.481 } 00:26:21.481 } 00:26:21.481 ], 00:26:21.481 "mp_policy": "active_passive" 00:26:21.481 } 00:26:21.481 } 00:26:21.481 ]' 00:26:21.481 10:04:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:21.481 10:04:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:21.481 10:04:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:21.481 10:04:10 -- common/autotest_common.sh@1373 -- # nb=1310720 00:26:21.481 10:04:10 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:26:21.481 10:04:10 -- common/autotest_common.sh@1377 -- # echo 5120 00:26:21.481 10:04:10 -- ftl/common.sh@63 -- # base_size=5120 00:26:21.481 10:04:10 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:21.481 10:04:10 -- ftl/common.sh@67 -- # clear_lvols 00:26:21.481 10:04:10 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:21.481 10:04:10 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:21.742 10:04:10 -- ftl/common.sh@28 -- # stores=369b6992-0167-445b-915d-f3006b2fd532 00:26:21.742 10:04:10 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:21.742 10:04:10 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 369b6992-0167-445b-915d-f3006b2fd532 00:26:22.003 10:04:10 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:22.264 10:04:11 -- ftl/common.sh@68 -- # lvs=c5a9dd39-b0fd-4779-a74b-a1f9643ccca7 00:26:22.264 10:04:11 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c5a9dd39-b0fd-4779-a74b-a1f9643ccca7 00:26:22.264 10:04:11 -- ftl/common.sh@107 -- # base_bdev=a8d1e140-a9d7-4308-8a98-76dbd82a70bb 00:26:22.264 10:04:11 -- ftl/common.sh@108 -- # [[ -z a8d1e140-a9d7-4308-8a98-76dbd82a70bb ]] 00:26:22.264 10:04:11 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 a8d1e140-a9d7-4308-8a98-76dbd82a70bb 5120 00:26:22.264 10:04:11 -- ftl/common.sh@35 -- # local name=cache 00:26:22.264 10:04:11 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:22.264 10:04:11 -- ftl/common.sh@37 -- # local base_bdev=a8d1e140-a9d7-4308-8a98-76dbd82a70bb 00:26:22.264 10:04:11 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:22.525 10:04:11 -- ftl/common.sh@41 -- # get_bdev_size a8d1e140-a9d7-4308-8a98-76dbd82a70bb 00:26:22.525 10:04:11 -- common/autotest_common.sh@1367 -- # local bdev_name=a8d1e140-a9d7-4308-8a98-76dbd82a70bb 00:26:22.525 10:04:11 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:22.525 10:04:11 -- common/autotest_common.sh@1369 -- # local bs 00:26:22.525 10:04:11 -- common/autotest_common.sh@1370 -- # local nb 00:26:22.525 10:04:11 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8d1e140-a9d7-4308-8a98-76dbd82a70bb 00:26:22.525 10:04:11 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:22.525 { 00:26:22.525 "name": "a8d1e140-a9d7-4308-8a98-76dbd82a70bb", 00:26:22.525 "aliases": [ 00:26:22.525 "lvs/basen1p0" 00:26:22.525 ], 00:26:22.525 "product_name": "Logical Volume", 00:26:22.525 "block_size": 4096, 00:26:22.525 "num_blocks": 5242880, 00:26:22.525 "uuid": "a8d1e140-a9d7-4308-8a98-76dbd82a70bb", 00:26:22.525 "assigned_rate_limits": { 00:26:22.525 "rw_ios_per_sec": 0, 00:26:22.525 "rw_mbytes_per_sec": 0, 00:26:22.525 "r_mbytes_per_sec": 0, 00:26:22.525 "w_mbytes_per_sec": 0 00:26:22.525 }, 00:26:22.525 "claimed": false, 00:26:22.525 "zoned": false, 00:26:22.525 "supported_io_types": { 00:26:22.525 "read": true, 00:26:22.525 "write": true, 00:26:22.525 "unmap": true, 00:26:22.525 "write_zeroes": true, 00:26:22.525 "flush": false, 00:26:22.525 "reset": true, 00:26:22.525 "compare": false, 00:26:22.525 "compare_and_write": false, 00:26:22.525 "abort": false, 00:26:22.525 "nvme_admin": false, 00:26:22.525 "nvme_io": false 00:26:22.525 }, 00:26:22.525 "driver_specific": { 00:26:22.525 "lvol": { 00:26:22.525 "lvol_store_uuid": "c5a9dd39-b0fd-4779-a74b-a1f9643ccca7", 00:26:22.525 "base_bdev": "basen1", 00:26:22.525 "thin_provision": true, 00:26:22.525 "snapshot": false, 00:26:22.525 "clone": false, 00:26:22.525 "esnap_clone": false 00:26:22.525 } 00:26:22.526 } 00:26:22.526 } 00:26:22.526 ]' 00:26:22.526 10:04:11 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:22.526 10:04:11 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:22.526 10:04:11 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:22.785 10:04:11 -- common/autotest_common.sh@1373 -- # nb=5242880 00:26:22.785 10:04:11 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:26:22.785 10:04:11 -- common/autotest_common.sh@1377 -- # echo 20480 00:26:22.785 10:04:11 -- ftl/common.sh@41 -- # local base_size=1024 00:26:22.785 10:04:11 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:22.785 10:04:11 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:23.043 10:04:11 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:23.043 10:04:11 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:23.043 10:04:11 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:23.043 10:04:12 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:23.043 10:04:12 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:23.043 10:04:12 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a8d1e140-a9d7-4308-8a98-76dbd82a70bb -c cachen1p0 --l2p_dram_limit 2 00:26:23.303 
[2024-12-15 10:04:12.176423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.176558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:23.303 [2024-12-15 10:04:12.176577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:23.303 [2024-12-15 10:04:12.176586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.176635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.176642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:23.303 [2024-12-15 10:04:12.176650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:23.303 [2024-12-15 10:04:12.176656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.176673] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:23.303 [2024-12-15 10:04:12.177277] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:23.303 [2024-12-15 10:04:12.177293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.177300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:23.303 [2024-12-15 10:04:12.177309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:26:23.303 [2024-12-15 10:04:12.177315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.177368] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c19a5606-2d6f-46d2-8ee5-e9060cf83d01 00:26:23.303 [2024-12-15 10:04:12.178307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.178329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:23.303 [2024-12-15 10:04:12.178337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:23.303 [2024-12-15 10:04:12.178344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.182963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.182991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:23.303 [2024-12-15 10:04:12.182998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.585 ms 00:26:23.303 [2024-12-15 10:04:12.183005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.183034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.183042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:23.303 [2024-12-15 10:04:12.183048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:23.303 [2024-12-15 10:04:12.183057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.183087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.183098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:23.303 [2024-12-15 10:04:12.183104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:23.303 [2024-12-15 10:04:12.183111] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.183129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:23.303 [2024-12-15 10:04:12.186094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.186119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:23.303 [2024-12-15 10:04:12.186128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.969 ms 00:26:23.303 [2024-12-15 10:04:12.186134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.186157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.186163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:23.303 [2024-12-15 10:04:12.186171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:23.303 [2024-12-15 10:04:12.186176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.186200] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:23.303 [2024-12-15 10:04:12.186295] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:23.303 [2024-12-15 10:04:12.186308] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:23.303 [2024-12-15 10:04:12.186316] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:23.303 [2024-12-15 10:04:12.186326] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:23.303 [2024-12-15 10:04:12.186332] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:23.303 [2024-12-15 10:04:12.186341] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:23.303 [2024-12-15 10:04:12.186347] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:23.303 [2024-12-15 10:04:12.186355] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:23.303 [2024-12-15 10:04:12.186360] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:23.303 [2024-12-15 10:04:12.186367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.186377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:23.303 [2024-12-15 10:04:12.186385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:26:23.303 [2024-12-15 10:04:12.186390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.186438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.303 [2024-12-15 10:04:12.186444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:23.303 [2024-12-15 10:04:12.186451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:23.303 [2024-12-15 10:04:12.186459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.303 [2024-12-15 10:04:12.186515] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:23.303 [2024-12-15 10:04:12.186521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:23.303 [2024-12-15 
10:04:12.186529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:23.303 [2024-12-15 10:04:12.186534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:23.303 [2024-12-15 10:04:12.186547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:23.303 [2024-12-15 10:04:12.186559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:23.303 [2024-12-15 10:04:12.186566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:23.303 [2024-12-15 10:04:12.186571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:23.303 [2024-12-15 10:04:12.186585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:23.303 [2024-12-15 10:04:12.186592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:23.303 [2024-12-15 10:04:12.186604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:23.303 [2024-12-15 10:04:12.186621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:23.303 [2024-12-15 10:04:12.186627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.303 [2024-12-15 10:04:12.186633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:23.303 [2024-12-15 10:04:12.186639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:23.303 [2024-12-15 10:04:12.186644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:23.303 [2024-12-15 10:04:12.186650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:23.303 [2024-12-15 10:04:12.186655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:23.303 [2024-12-15 10:04:12.186661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:23.303 [2024-12-15 10:04:12.186666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:23.303 [2024-12-15 10:04:12.186672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:23.304 [2024-12-15 10:04:12.186677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:23.304 [2024-12-15 10:04:12.186683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:23.304 [2024-12-15 10:04:12.186687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:23.304 [2024-12-15 10:04:12.186693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:23.304 [2024-12-15 10:04:12.186698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:23.304 [2024-12-15 10:04:12.186707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:23.304 [2024-12-15 10:04:12.186711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:23.304 [2024-12-15 
10:04:12.186717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:23.304 [2024-12-15 10:04:12.186722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:23.304 [2024-12-15 10:04:12.186728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.304 [2024-12-15 10:04:12.186733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:23.304 [2024-12-15 10:04:12.186740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:23.304 [2024-12-15 10:04:12.186744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.304 [2024-12-15 10:04:12.186750] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:23.304 [2024-12-15 10:04:12.186756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:23.304 [2024-12-15 10:04:12.186764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:23.304 [2024-12-15 10:04:12.186769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.304 [2024-12-15 10:04:12.186779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:23.304 [2024-12-15 10:04:12.186784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:23.304 [2024-12-15 10:04:12.186790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:23.304 [2024-12-15 10:04:12.186795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:23.304 [2024-12-15 10:04:12.186803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:23.304 [2024-12-15 10:04:12.186808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:23.304 [2024-12-15 10:04:12.186815] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:23.304 [2024-12-15 10:04:12.186823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:23.304 [2024-12-15 10:04:12.186836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:23.304 [2024-12-15 10:04:12.186855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:23.304 [2024-12-15 10:04:12.186860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:23.304 [2024-12-15 10:04:12.186867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:23.304 [2024-12-15 10:04:12.186872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186879] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:23.304 [2024-12-15 10:04:12.186906] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:23.304 [2024-12-15 10:04:12.186911] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:23.304 [2024-12-15 10:04:12.186919] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.304 [2024-12-15 10:04:12.186932] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:23.304 [2024-12-15 10:04:12.186937] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:23.304 [2024-12-15 10:04:12.186944] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:23.304 [2024-12-15 10:04:12.186949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.186956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:23.304 [2024-12-15 10:04:12.186962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:26:23.304 [2024-12-15 10:04:12.186969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.198821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.198919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:23.304 [2024-12-15 10:04:12.198960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.822 ms 00:26:23.304 [2024-12-15 10:04:12.198979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.199019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.199038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:23.304 [2024-12-15 10:04:12.199055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:23.304 [2024-12-15 10:04:12.199071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.222857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.222955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:23.304 [2024-12-15 10:04:12.222997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.743 ms 00:26:23.304 [2024-12-15 
10:04:12.223018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.223053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.223070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:23.304 [2024-12-15 10:04:12.223085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:23.304 [2024-12-15 10:04:12.223100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.223424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.223459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:23.304 [2024-12-15 10:04:12.223475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:26:23.304 [2024-12-15 10:04:12.223490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.223535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.223554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:23.304 [2024-12-15 10:04:12.223616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:23.304 [2024-12-15 10:04:12.223636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.235536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.235622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:23.304 [2024-12-15 10:04:12.235659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.876 ms 00:26:23.304 [2024-12-15 10:04:12.235677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.244561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:23.304 [2024-12-15 10:04:12.245443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.245519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:23.304 [2024-12-15 10:04:12.245559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.600 ms 00:26:23.304 [2024-12-15 10:04:12.245577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.266220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.304 [2024-12-15 10:04:12.266322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:23.304 [2024-12-15 10:04:12.266365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.613 ms 00:26:23.304 [2024-12-15 10:04:12.266383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.304 [2024-12-15 10:04:12.266422] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
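Every management step in the bdev_ftl_create bring-up above is bracketed by the same four trace_step notices (Action, name, duration, status). To turn that wall of notices into a step/duration summary, a hedged one-liner (ftl_startup.log is a hypothetical file holding the captured notices):

awk -F 'name: |duration: ' '/name: / {n = $2} /duration: / {print n " -> " $2}' ftl_startup.log

For the trace above this would print lines such as "Initialize memory pools -> 4.585 ms" and "Initialize NV cache -> 23.743 ms", making the slow steps easy to spot.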
00:26:23.304 [2024-12-15 10:04:12.266451] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:27.509 [2024-12-15 10:04:16.154415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.154767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:27.509 [2024-12-15 10:04:16.154803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3887.963 ms 00:26:27.509 [2024-12-15 10:04:16.154814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.154938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.154951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:27.509 [2024-12-15 10:04:16.154966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:26:27.509 [2024-12-15 10:04:16.154976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.181232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.181298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:27.509 [2024-12-15 10:04:16.181317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.191 ms 00:26:27.509 [2024-12-15 10:04:16.181326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.207058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.207108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:27.509 [2024-12-15 10:04:16.207128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.670 ms 00:26:27.509 [2024-12-15 10:04:16.207135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.207532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.207546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:27.509 [2024-12-15 10:04:16.207558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:26:27.509 [2024-12-15 10:04:16.207566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.280333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.280384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:27.509 [2024-12-15 10:04:16.280402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 72.702 ms 00:26:27.509 [2024-12-15 10:04:16.280411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.308408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.308463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:27.509 [2024-12-15 10:04:16.308479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.960 ms 00:26:27.509 [2024-12-15 10:04:16.308488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.309967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.310015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:27.509 [2024-12-15 10:04:16.310032] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.436 ms 00:26:27.509 [2024-12-15 10:04:16.310041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.336854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.336906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:27.509 [2024-12-15 10:04:16.336923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.759 ms 00:26:27.509 [2024-12-15 10:04:16.336931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.336971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.336980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:27.509 [2024-12-15 10:04:16.336992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:27.509 [2024-12-15 10:04:16.337000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.337099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.509 [2024-12-15 10:04:16.337112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:27.509 [2024-12-15 10:04:16.337125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:26:27.509 [2024-12-15 10:04:16.337133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.509 [2024-12-15 10:04:16.338357] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4161.385 ms, result 0 00:26:27.509 { 00:26:27.509 "name": "ftl", 00:26:27.509 "uuid": "c19a5606-2d6f-46d2-8ee5-e9060cf83d01" 00:26:27.509 } 00:26:27.509 10:04:16 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:27.771 [2024-12-15 10:04:16.569426] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.771 10:04:16 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:27.771 10:04:16 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:28.032 [2024-12-15 10:04:16.973830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:28.032 10:04:16 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:28.292 [2024-12-15 10:04:17.183108] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:28.292 10:04:17 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:28.553 Fill FTL, iteration 1 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:28.553 10:04:17 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:28.553 10:04:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:28.553 10:04:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:28.553 10:04:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:28.553 10:04:17 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:28.553 10:04:17 -- ftl/common.sh@163 -- # spdk_ini_pid=78767 00:26:28.553 10:04:17 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:28.553 10:04:17 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:28.553 10:04:17 -- ftl/common.sh@165 -- # waitforlisten 78767 /var/tmp/spdk.tgt.sock 00:26:28.553 10:04:17 -- common/autotest_common.sh@829 -- # '[' -z 78767 ']' 00:26:28.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:28.553 10:04:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:28.553 10:04:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.553 10:04:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:28.553 10:04:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.553 10:04:17 -- common/autotest_common.sh@10 -- # set +x 00:26:28.814 [2024-12-15 10:04:17.611392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
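The xtrace lines at upgrade_shutdown.sh@28-@48 above and below pin down the shape of the fill/verify loop. A hedged reconstruction (variable names match the trace; the exact source lines may differ):

size=1073741824 seek=0 skip=0 bs=1048576 count=1024 iterations=2 qd=2
sums=()
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
    sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
done

Each iteration writes 1 GiB of random data into the FTL bdev over NVMe/TCP, reads the same window back, and records its MD5 so the data can be re-verified after the shutdown/upgrade cycle this test exercises.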
00:26:28.814 [2024-12-15 10:04:17.611812] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ] 00:26:28.814 [2024-12-15 10:04:17.763396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.075 [2024-12-15 10:04:18.000705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:29.075 [2024-12-15 10:04:18.001102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.456 10:04:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.456 10:04:19 -- common/autotest_common.sh@862 -- # return 0 00:26:30.456 10:04:19 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:30.456 ftln1 00:26:30.456 10:04:19 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:30.456 10:04:19 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:30.716 10:04:19 -- ftl/common.sh@173 -- # echo ']}' 00:26:30.716 10:04:19 -- ftl/common.sh@176 -- # killprocess 78767 00:26:30.716 10:04:19 -- common/autotest_common.sh@936 -- # '[' -z 78767 ']' 00:26:30.716 10:04:19 -- common/autotest_common.sh@940 -- # kill -0 78767 00:26:30.716 10:04:19 -- common/autotest_common.sh@941 -- # uname 00:26:30.716 10:04:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:30.716 10:04:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78767 00:26:30.716 killing process with pid 78767 00:26:30.716 10:04:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:30.716 10:04:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:30.716 10:04:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78767' 00:26:30.716 10:04:19 -- common/autotest_common.sh@955 -- # kill 78767 00:26:30.716 10:04:19 -- common/autotest_common.sh@960 -- # wait 78767 00:26:32.628 10:04:21 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:32.629 10:04:21 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:32.629 [2024-12-15 10:04:21.175543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
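The tcp_initiator_setup sequence just traced (ftl/common.sh@151-@177) exists to produce ini.json: a short-lived initiator spdk_tgt attaches the exported FTL namespace over NVMe/TCP, its bdev subsystem configuration is snapshotted to JSON, and the process is killed again. A hedged reconstruction from the trace (approximate, not verbatim source):

tcp_initiator_setup() {
    local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $spdk_ini_rpc"
    [[ -f $spdk_ini_cnfg ]] && return 0  # fast path once the config exists
    $spdk_ini_bin "--cpumask=$spdk_ini_cpumask" --rpc-socket=$spdk_ini_rpc &
    spdk_ini_pid=$!
    waitforlisten $spdk_ini_pid $spdk_ini_rpc
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    { echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
    } > $spdk_ini_cnfg
    killprocess $spdk_ini_pid
    unset spdk_ini_pid
}

On every later tcp_dd call the fast path fires (the "return 0" at common.sh@154 in the traces below), so the initiator bring-up happens exactly once.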
00:26:32.629 [2024-12-15 10:04:21.175628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78818 ] 00:26:32.629 [2024-12-15 10:04:21.317304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.629 [2024-12-15 10:04:21.458017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.019  [2024-12-15T10:04:24.021Z] Copying: 256/1024 [MB] (256 MBps) [2024-12-15T10:04:24.956Z] Copying: 519/1024 [MB] (263 MBps) [2024-12-15T10:04:25.910Z] Copying: 779/1024 [MB] (260 MBps) [2024-12-15T10:04:26.479Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:26:37.463 00:26:37.463 10:04:26 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:37.463 10:04:26 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:37.463 Calculate MD5 checksum, iteration 1 00:26:37.463 10:04:26 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:37.463 10:04:26 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:37.463 10:04:26 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:37.463 10:04:26 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:37.463 10:04:26 -- ftl/common.sh@154 -- # return 0 00:26:37.463 10:04:26 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:37.463 [2024-12-15 10:04:26.442072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
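A note on the --seek/--skip bookkeeping visible above: like plain dd, spdk_dd appears to count both in --bs-sized blocks rather than bytes, which is why seek jumps from 0 to 1024 between the two 1 GiB fills. The arithmetic, spelled out:

echo $(( 1048576 * 1024 ))  # 1073741824 bytes = one full iteration (the traced size)
echo $(( 0    * 1048576 ))  # iteration 1 window starts at byte 0     (seek/skip 0)
echo $(( 1024 * 1048576 ))  # iteration 2 window starts at byte 1 GiB (seek/skip 1024)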
00:26:37.463 [2024-12-15 10:04:26.442183] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78878 ] 00:26:37.722 [2024-12-15 10:04:26.591003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.980 [2024-12-15 10:04:26.743093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.355  [2024-12-15T10:04:28.629Z] Copying: 670/1024 [MB] (670 MBps) [2024-12-15T10:04:29.566Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:26:40.550 00:26:40.550 10:04:29 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:40.550 10:04:29 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:42.452 10:04:31 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:42.452 Fill FTL, iteration 2 00:26:42.452 10:04:31 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=30be73dc738e65b72b6cf599b26db831 00:26:42.452 10:04:31 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:42.452 10:04:31 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:42.452 10:04:31 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:42.452 10:04:31 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:42.452 10:04:31 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:42.452 10:04:31 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:42.452 10:04:31 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:42.452 10:04:31 -- ftl/common.sh@154 -- # return 0 00:26:42.452 10:04:31 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:42.452 [2024-12-15 10:04:31.360932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
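Each tcp_dd call above expands to exactly the two steps traced at ftl/common.sh@198-@199; a minimal reconstruction:

tcp_dd() {
    tcp_initiator_setup
    $spdk_dd_bin "--cpumask=$spdk_ini_cpumask" --rpc-socket=$spdk_ini_rpc \
        --json=$spdk_ini_cnfg "$@"
}

Because spdk_dd is handed the saved --json config, it recreates the NVMe/TCP bdev ftln1 itself on startup, which is why every dd run in this log prints its own "Starting SPDK ... initialization" banner and EAL parameter line.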
00:26:42.452 [2024-12-15 10:04:31.361179] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78936 ] 00:26:42.713 [2024-12-15 10:04:31.509762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.713 [2024-12-15 10:04:31.717376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.627  [2024-12-15T10:04:34.214Z] Copying: 230/1024 [MB] (230 MBps) [2024-12-15T10:04:35.156Z] Copying: 467/1024 [MB] (237 MBps) [2024-12-15T10:04:36.542Z] Copying: 708/1024 [MB] (241 MBps) [2024-12-15T10:04:36.542Z] Copying: 940/1024 [MB] (232 MBps) [2024-12-15T10:04:37.485Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:26:48.469 00:26:48.469 10:04:37 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:48.469 10:04:37 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:48.469 Calculate MD5 checksum, iteration 2 00:26:48.469 10:04:37 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:48.469 10:04:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:48.469 10:04:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:48.469 10:04:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:48.469 10:04:37 -- ftl/common.sh@154 -- # return 0 00:26:48.469 10:04:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:48.469 [2024-12-15 10:04:37.215470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
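Once both iterations finish, the test enables verbose_mode and inspects the FTL properties (traced below) to confirm the NV cache actually holds data. The guard at upgrade_shutdown.sh@63 boils down to counting non-empty cache chunks; reproduced as a standalone sketch:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device")
           | .chunks[] | select(.utilization != 0.0)] | length'

Against the chunk state dumped below (chunks 0 and 1 CLOSED at utilization 1.0, chunk 2 OPEN at 0.001953125) this yields used=3, so the "[[ 3 -eq 0 ]]" check fails as intended, confirming the cache is non-empty before the upgrade shutdown path is exercised.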
00:26:48.469 [2024-12-15 10:04:37.215733] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79000 ] 00:26:48.469 [2024-12-15 10:04:37.363688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.730 [2024-12-15 10:04:37.527533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.118  [2024-12-15T10:04:39.705Z] Copying: 649/1024 [MB] (649 MBps) [2024-12-15T10:04:41.094Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:26:52.078 00:26:52.078 10:04:40 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:52.078 10:04:40 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:53.985 10:04:42 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:53.985 10:04:42 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8c2de056254b492b9ff58f019fdbcebb 00:26:53.985 10:04:42 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:53.985 10:04:42 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:53.985 10:04:42 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:53.985 [2024-12-15 10:04:42.819207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.985 [2024-12-15 10:04:42.819353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:53.985 [2024-12-15 10:04:42.819371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:53.985 [2024-12-15 10:04:42.819382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.985 [2024-12-15 10:04:42.819406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.985 [2024-12-15 10:04:42.819414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:53.985 [2024-12-15 10:04:42.819420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:53.985 [2024-12-15 10:04:42.819427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.985 [2024-12-15 10:04:42.819442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.985 [2024-12-15 10:04:42.819449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:53.985 [2024-12-15 10:04:42.819460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:53.985 [2024-12-15 10:04:42.819466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.985 [2024-12-15 10:04:42.819517] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.299 ms, result 0 00:26:53.985 true 00:26:53.985 10:04:42 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.244 { 00:26:54.244 "name": "ftl", 00:26:54.244 "properties": [ 00:26:54.244 { 00:26:54.244 "name": "superblock_version", 00:26:54.244 "value": 5, 00:26:54.244 "read-only": true 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "name": "base_device", 00:26:54.244 "bands": [ 00:26:54.244 { 00:26:54.244 "id": 0, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 1, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 2, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 
00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 3, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 4, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 5, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 6, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 7, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 8, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 9, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 10, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 11, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 12, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 13, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 14, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 15, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 16, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 17, 00:26:54.244 "state": "FREE", 00:26:54.244 "validity": 0.0 00:26:54.244 } 00:26:54.244 ], 00:26:54.244 "read-only": true 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "name": "cache_device", 00:26:54.244 "type": "bdev", 00:26:54.244 "chunks": [ 00:26:54.244 { 00:26:54.244 "id": 0, 00:26:54.244 "state": "CLOSED", 00:26:54.244 "utilization": 1.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 1, 00:26:54.244 "state": "CLOSED", 00:26:54.244 "utilization": 1.0 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 2, 00:26:54.244 "state": "OPEN", 00:26:54.244 "utilization": 0.001953125 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "id": 3, 00:26:54.244 "state": "OPEN", 00:26:54.244 "utilization": 0.0 00:26:54.244 } 00:26:54.244 ], 00:26:54.244 "read-only": true 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "name": "verbose_mode", 00:26:54.244 "value": true, 00:26:54.244 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:54.244 }, 00:26:54.244 { 00:26:54.244 "name": "prep_upgrade_on_shutdown", 00:26:54.244 "value": false, 00:26:54.244 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:54.244 } 00:26:54.244 ] 00:26:54.244 } 00:26:54.244 10:04:43 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:54.244 [2024-12-15 10:04:43.191488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.245 [2024-12-15 10:04:43.191519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.245 [2024-12-15 10:04:43.191528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:54.245 [2024-12-15 10:04:43.191534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.245 [2024-12-15 10:04:43.191551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:54.245 [2024-12-15 10:04:43.191557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.245 [2024-12-15 10:04:43.191563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.245 [2024-12-15 10:04:43.191568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.245 [2024-12-15 10:04:43.191583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.245 [2024-12-15 10:04:43.191588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.245 [2024-12-15 10:04:43.191594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.245 [2024-12-15 10:04:43.191599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.245 [2024-12-15 10:04:43.191638] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.140 ms, result 0 00:26:54.245 true 00:26:54.245 10:04:43 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:54.245 10:04:43 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:54.245 10:04:43 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.503 10:04:43 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:54.503 10:04:43 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:54.503 10:04:43 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:54.761 [2024-12-15 10:04:43.558673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.761 [2024-12-15 10:04:43.558704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.761 [2024-12-15 10:04:43.558714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:54.761 [2024-12-15 10:04:43.558720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.761 [2024-12-15 10:04:43.558736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.761 [2024-12-15 10:04:43.558742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.761 [2024-12-15 10:04:43.558748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.761 [2024-12-15 10:04:43.558753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.761 [2024-12-15 10:04:43.558768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.761 [2024-12-15 10:04:43.558773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.761 [2024-12-15 10:04:43.558779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:54.761 [2024-12-15 10:04:43.558784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.761 [2024-12-15 10:04:43.558823] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.141 ms, result 0 00:26:54.761 true 00:26:54.761 10:04:43 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.761 { 00:26:54.761 "name": "ftl", 00:26:54.761 "properties": [ 00:26:54.761 { 00:26:54.761 "name": "superblock_version", 00:26:54.761 "value": 5, 00:26:54.761 "read-only": true 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 
"name": "base_device", 00:26:54.761 "bands": [ 00:26:54.761 { 00:26:54.761 "id": 0, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 1, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 2, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 3, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 4, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 5, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 6, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 7, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 8, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 9, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 10, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 11, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 12, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 13, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 14, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 15, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 16, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 17, 00:26:54.761 "state": "FREE", 00:26:54.761 "validity": 0.0 00:26:54.761 } 00:26:54.761 ], 00:26:54.761 "read-only": true 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "name": "cache_device", 00:26:54.761 "type": "bdev", 00:26:54.761 "chunks": [ 00:26:54.761 { 00:26:54.761 "id": 0, 00:26:54.761 "state": "CLOSED", 00:26:54.761 "utilization": 1.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 1, 00:26:54.761 "state": "CLOSED", 00:26:54.761 "utilization": 1.0 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 2, 00:26:54.761 "state": "OPEN", 00:26:54.761 "utilization": 0.001953125 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "id": 3, 00:26:54.761 "state": "OPEN", 00:26:54.761 "utilization": 0.0 00:26:54.761 } 00:26:54.761 ], 00:26:54.761 "read-only": true 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "name": "verbose_mode", 00:26:54.761 "value": true, 00:26:54.762 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:54.762 }, 00:26:54.762 { 00:26:54.762 "name": "prep_upgrade_on_shutdown", 00:26:54.762 "value": true, 00:26:54.762 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:54.762 } 00:26:54.762 ] 00:26:54.762 } 00:26:54.762 10:04:43 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:54.762 10:04:43 -- ftl/common.sh@130 -- # [[ -n 78625 ]] 00:26:54.762 10:04:43 -- ftl/common.sh@131 -- # killprocess 78625 00:26:54.762 10:04:43 -- common/autotest_common.sh@936 -- # '[' -z 78625 ']' 00:26:54.762 10:04:43 -- 
common/autotest_common.sh@940 -- # kill -0 78625 00:26:54.762 10:04:43 -- common/autotest_common.sh@941 -- # uname 00:26:55.020 10:04:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:55.020 10:04:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78625 00:26:55.020 killing process with pid 78625 00:26:55.020 10:04:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:55.020 10:04:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:55.020 10:04:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78625' 00:26:55.020 10:04:43 -- common/autotest_common.sh@955 -- # kill 78625 00:26:55.020 10:04:43 -- common/autotest_common.sh@960 -- # wait 78625 00:26:55.589 [2024-12-15 10:04:44.328845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:55.589 [2024-12-15 10:04:44.340531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.589 [2024-12-15 10:04:44.340563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:55.589 [2024-12-15 10:04:44.340574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:55.589 [2024-12-15 10:04:44.340581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.589 [2024-12-15 10:04:44.340598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:55.589 [2024-12-15 10:04:44.342743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.589 [2024-12-15 10:04:44.342766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:55.589 [2024-12-15 10:04:44.342774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.136 ms 00:26:55.589 [2024-12-15 10:04:44.342781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.210177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.210374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:05.593 [2024-12-15 10:04:53.210438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8867.343 ms 00:27:05.593 [2024-12-15 10:04:53.210462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.211563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.211652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:05.593 [2024-12-15 10:04:53.211711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:27:05.593 [2024-12-15 10:04:53.211731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.212614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.212687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:05.593 [2024-12-15 10:04:53.212740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:27:05.593 [2024-12-15 10:04:53.212758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.220810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.220910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:05.593 [2024-12-15 10:04:53.220958] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.002 ms 00:27:05.593 [2024-12-15 10:04:53.220975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.226610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.226712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:05.593 [2024-12-15 10:04:53.226760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.603 ms 00:27:05.593 [2024-12-15 10:04:53.226778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.226850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.226871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:05.593 [2024-12-15 10:04:53.226886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:05.593 [2024-12-15 10:04:53.226932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.234399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.234486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:05.593 [2024-12-15 10:04:53.234525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.443 ms 00:27:05.593 [2024-12-15 10:04:53.234542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.241864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.241953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:05.593 [2024-12-15 10:04:53.241995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.292 ms 00:27:05.593 [2024-12-15 10:04:53.242012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.249634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.249720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:05.593 [2024-12-15 10:04:53.249761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.582 ms 00:27:05.593 [2024-12-15 10:04:53.249778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.257520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.257606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:05.593 [2024-12-15 10:04:53.257647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.685 ms 00:27:05.593 [2024-12-15 10:04:53.257664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.257693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:05.593 [2024-12-15 10:04:53.257944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:05.593 [2024-12-15 10:04:53.257991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:05.593 [2024-12-15 10:04:53.258045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:05.593 [2024-12-15 10:04:53.258055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:05.593 [2024-12-15 10:04:53.258156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:05.593 [2024-12-15 10:04:53.258163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c19a5606-2d6f-46d2-8ee5-e9060cf83d01 00:27:05.593 [2024-12-15 10:04:53.258170] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:05.593 [2024-12-15 10:04:53.258176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:05.593 [2024-12-15 10:04:53.258182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:05.593 [2024-12-15 10:04:53.258189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:05.593 [2024-12-15 10:04:53.258195] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:05.593 [2024-12-15 10:04:53.258201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:05.593 [2024-12-15 10:04:53.258209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:05.593 [2024-12-15 10:04:53.258214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:05.593 [2024-12-15 10:04:53.258219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:05.593 [2024-12-15 10:04:53.258226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.258234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:05.593 [2024-12-15 10:04:53.258242] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:27:05.593 [2024-12-15 10:04:53.258247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.268412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.268440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:05.593 [2024-12-15 10:04:53.268449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.112 ms 00:27:05.593 [2024-12-15 10:04:53.268461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.268626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.593 [2024-12-15 10:04:53.268633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:05.593 [2024-12-15 10:04:53.268640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:27:05.593 [2024-12-15 10:04:53.268645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.305420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.593 [2024-12-15 10:04:53.305452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:05.593 [2024-12-15 10:04:53.305461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.593 [2024-12-15 10:04:53.305471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.593 [2024-12-15 10:04:53.305502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.593 [2024-12-15 10:04:53.305509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:05.593 [2024-12-15 10:04:53.305515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.305522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.305575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.305584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:05.594 [2024-12-15 10:04:53.305591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.305597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.305614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.305620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:05.594 [2024-12-15 10:04:53.305626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.305632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.367511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.367660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:05.594 [2024-12-15 10:04:53.367675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.367683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:05.594 
[2024-12-15 10:04:53.391466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:05.594 [2024-12-15 10:04:53.391542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:05.594 [2024-12-15 10:04:53.391599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:05.594 [2024-12-15 10:04:53.391695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:05.594 [2024-12-15 10:04:53.391746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:05.594 [2024-12-15 10:04:53.391802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:05.594 [2024-12-15 10:04:53.391861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:05.594 [2024-12-15 10:04:53.391867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:05.594 [2024-12-15 10:04:53.391873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.594 [2024-12-15 10:04:53.391983] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9051.391 ms, result 0 00:27:10.928 10:04:59 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:10.928 10:04:59 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:10.928 10:04:59 -- ftl/common.sh@81 -- # local base_bdev= 00:27:10.928 10:04:59 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:10.928 10:04:59 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:10.928 10:04:59 -- ftl/common.sh@89 -- # spdk_tgt_pid=79214 00:27:10.928 10:04:59 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:10.928 10:04:59 -- ftl/common.sh@91 -- # waitforlisten 79214 00:27:10.928 10:04:59 -- common/autotest_common.sh@829 -- # '[' -z 79214 ']' 00:27:10.928 10:04:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.928 10:04:59 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:10.928 10:04:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.928 10:04:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.928 10:04:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.928 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:10.928 [2024-12-15 10:04:59.334831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:10.928 [2024-12-15 10:04:59.334977] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79214 ] 00:27:10.928 [2024-12-15 10:04:59.489598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.928 [2024-12-15 10:04:59.730194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:10.928 [2024-12-15 10:04:59.730647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.500 [2024-12-15 10:05:00.479947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:11.500 [2024-12-15 10:05:00.480033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:11.762 [2024-12-15 10:05:00.627444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.627503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:11.762 [2024-12-15 10:05:00.627518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:11.762 [2024-12-15 10:05:00.627526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.627590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.627604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:11.762 [2024-12-15 10:05:00.627613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:11.762 [2024-12-15 10:05:00.627620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.627643] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:11.762 [2024-12-15 10:05:00.628435] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:11.762 [2024-12-15 10:05:00.628474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.628483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:11.762 [2024-12-15 10:05:00.628492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:27:11.762 [2024-12-15 10:05:00.628500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.630337] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:11.762 [2024-12-15 10:05:00.645053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.645365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:11.762 [2024-12-15 10:05:00.645390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.718 ms 00:27:11.762 [2024-12-15 10:05:00.645399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.645481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.645493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:11.762 [2024-12-15 10:05:00.645501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:11.762 [2024-12-15 10:05:00.645510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.654120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.654165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:11.762 [2024-12-15 10:05:00.654177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.522 ms 00:27:11.762 [2024-12-15 10:05:00.654194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.654377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.654390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:11.762 [2024-12-15 10:05:00.654399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.156 ms 00:27:11.762 [2024-12-15 10:05:00.654407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.654456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.654466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:11.762 [2024-12-15 10:05:00.654477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:11.762 [2024-12-15 10:05:00.654485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.654523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:11.762 [2024-12-15 10:05:00.658749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.658792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:11.762 [2024-12-15 10:05:00.658807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.238 ms 00:27:11.762 [2024-12-15 10:05:00.658815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.658852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.762 [2024-12-15 10:05:00.658861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:11.762 [2024-12-15 10:05:00.658870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:11.762 [2024-12-15 10:05:00.658878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.762 [2024-12-15 10:05:00.658932] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
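The layout dump that follows can be sanity-checked against the L2P geometry it reports: 3774873 entries at an address size of 4 bytes should land just under the 14.50 MiB printed for the l2p region. A quick arithmetic check, using only figures taken from the dump below (illustrative shell, not part of the traced suite):

# 3774873 L2P entries x 4 bytes per entry
echo $(( 3774873 * 4 ))              # 15099492 bytes
echo $(( 15099492 / 1024 / 1024 ))   # ~14 MiB, rounded up to the 14.50 MiB region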
00:27:11.762 [2024-12-15 10:05:00.658955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:11.762 [2024-12-15 10:05:00.658993] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:11.762 [2024-12-15 10:05:00.659015] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:11.762 [2024-12-15 10:05:00.659094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:11.762 [2024-12-15 10:05:00.659105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:11.762 [2024-12-15 10:05:00.659118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:11.762 [2024-12-15 10:05:00.659129] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:11.762 [2024-12-15 10:05:00.659138] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:11.762 [2024-12-15 10:05:00.659146] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:11.762 [2024-12-15 10:05:00.659158] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:11.763 [2024-12-15 10:05:00.659166] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:11.763 [2024-12-15 10:05:00.659176] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:11.763 [2024-12-15 10:05:00.659186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.659195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:11.763 [2024-12-15 10:05:00.659203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:27:11.763 [2024-12-15 10:05:00.659211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.659304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.659316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:11.763 [2024-12-15 10:05:00.659325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:27:11.763 [2024-12-15 10:05:00.659332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.659415] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:11.763 [2024-12-15 10:05:00.659428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:11.763 [2024-12-15 10:05:00.659436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:11.763 [2024-12-15 10:05:00.659461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:11.763 [2024-12-15 10:05:00.659477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:11.763 [2024-12-15 10:05:00.659485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:27:11.763 [2024-12-15 10:05:00.659492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:11.763 [2024-12-15 10:05:00.659507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:11.763 [2024-12-15 10:05:00.659515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:11.763 [2024-12-15 10:05:00.659529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:11.763 [2024-12-15 10:05:00.659548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:11.763 [2024-12-15 10:05:00.659554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:11.763 [2024-12-15 10:05:00.659569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:11.763 [2024-12-15 10:05:00.659576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:11.763 [2024-12-15 10:05:00.659589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:11.763 [2024-12-15 10:05:00.659595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:11.763 [2024-12-15 10:05:00.659608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:11.763 [2024-12-15 10:05:00.659614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:11.763 [2024-12-15 10:05:00.659626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:11.763 [2024-12-15 10:05:00.659633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:11.763 [2024-12-15 10:05:00.659648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:11.763 [2024-12-15 10:05:00.659654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:11.763 [2024-12-15 10:05:00.659667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:11.763 [2024-12-15 10:05:00.659674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:11.763 [2024-12-15 10:05:00.659687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659700] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:11.763 [2024-12-15 10:05:00.659707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:11.763 [2024-12-15 10:05:00.659715] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.763 [2024-12-15 10:05:00.659733] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:11.763 [2024-12-15 10:05:00.659740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:11.763 [2024-12-15 10:05:00.659747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:11.763 [2024-12-15 10:05:00.659754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:11.763 [2024-12-15 10:05:00.659762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:11.763 [2024-12-15 10:05:00.659768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:11.763 [2024-12-15 10:05:00.659776] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:11.763 [2024-12-15 10:05:00.659785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:11.763 [2024-12-15 10:05:00.659808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659816] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:11.763 [2024-12-15 10:05:00.659830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:11.763 [2024-12-15 10:05:00.659844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:11.763 [2024-12-15 10:05:00.659851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:11.763 [2024-12-15 10:05:00.659858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:11.763 [2024-12-15 10:05:00.659896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:11.763 [2024-12-15 10:05:00.659903] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:11.763 [2024-12-15 10:05:00.659911] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:11.763 [2024-12-15 10:05:00.659926] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:11.763 [2024-12-15 10:05:00.659933] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:11.763 [2024-12-15 10:05:00.659941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:11.763 [2024-12-15 10:05:00.659950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.659957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:11.763 [2024-12-15 10:05:00.659965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.579 ms 00:27:11.763 [2024-12-15 10:05:00.659973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.678427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.678476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:11.763 [2024-12-15 10:05:00.678489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.405 ms 00:27:11.763 [2024-12-15 10:05:00.678498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.678545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.678554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:11.763 [2024-12-15 10:05:00.678562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:11.763 [2024-12-15 10:05:00.678570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.714496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.714544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:11.763 [2024-12-15 10:05:00.714556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.861 ms 00:27:11.763 [2024-12-15 10:05:00.714565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.714606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.714615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:11.763 [2024-12-15 10:05:00.714625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:11.763 [2024-12-15 10:05:00.714632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.763 [2024-12-15 10:05:00.715186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.763 [2024-12-15 10:05:00.715229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:11.764 [2024-12-15 
10:05:00.715241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.501 ms 00:27:11.764 [2024-12-15 10:05:00.715249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.764 [2024-12-15 10:05:00.715318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.764 [2024-12-15 10:05:00.715328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:11.764 [2024-12-15 10:05:00.715337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:11.764 [2024-12-15 10:05:00.715344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.764 [2024-12-15 10:05:00.734297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.764 [2024-12-15 10:05:00.734341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:11.764 [2024-12-15 10:05:00.734353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.926 ms 00:27:11.764 [2024-12-15 10:05:00.734362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.764 [2024-12-15 10:05:00.749056] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:11.764 [2024-12-15 10:05:00.749294] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:11.764 [2024-12-15 10:05:00.749314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.764 [2024-12-15 10:05:00.749322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:11.764 [2024-12-15 10:05:00.749334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.827 ms 00:27:11.764 [2024-12-15 10:05:00.749352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.764 [2024-12-15 10:05:00.764626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.764 [2024-12-15 10:05:00.764840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:11.764 [2024-12-15 10:05:00.764863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.222 ms 00:27:11.764 [2024-12-15 10:05:00.764872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.777701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.777751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:12.025 [2024-12-15 10:05:00.777764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.696 ms 00:27:12.025 [2024-12-15 10:05:00.777773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.790666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.790715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:12.025 [2024-12-15 10:05:00.790727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.738 ms 00:27:12.025 [2024-12-15 10:05:00.790734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.791142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.791159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:12.025 [2024-12-15 10:05:00.791168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 
00:27:12.025 [2024-12-15 10:05:00.791177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.858769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.858991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:12.025 [2024-12-15 10:05:00.859016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 67.569 ms 00:27:12.025 [2024-12-15 10:05:00.859025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.870674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:12.025 [2024-12-15 10:05:00.871677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.871724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:12.025 [2024-12-15 10:05:00.871736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.599 ms 00:27:12.025 [2024-12-15 10:05:00.871751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.871832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.871843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:12.025 [2024-12-15 10:05:00.871852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:12.025 [2024-12-15 10:05:00.871862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.871922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.871935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:12.025 [2024-12-15 10:05:00.871945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:12.025 [2024-12-15 10:05:00.871954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.873490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.873674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:12.025 [2024-12-15 10:05:00.873696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.509 ms 00:27:12.025 [2024-12-15 10:05:00.873704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.025 [2024-12-15 10:05:00.873750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.025 [2024-12-15 10:05:00.873759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:12.025 [2024-12-15 10:05:00.873769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:12.025 [2024-12-15 10:05:00.873777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.026 [2024-12-15 10:05:00.873819] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:12.026 [2024-12-15 10:05:00.873829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.026 [2024-12-15 10:05:00.873842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:12.026 [2024-12-15 10:05:00.873851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:12.026 [2024-12-15 10:05:00.873859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.026 [2024-12-15 10:05:00.899791] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.026 [2024-12-15 10:05:00.899840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:12.026 [2024-12-15 10:05:00.899853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.908 ms 00:27:12.026 [2024-12-15 10:05:00.899862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.026 [2024-12-15 10:05:00.899970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.026 [2024-12-15 10:05:00.899981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:12.026 [2024-12-15 10:05:00.899991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:12.026 [2024-12-15 10:05:00.899999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.026 [2024-12-15 10:05:00.901335] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 273.387 ms, result 0 00:27:12.026 [2024-12-15 10:05:00.916198] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.026 [2024-12-15 10:05:00.932215] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:12.026 [2024-12-15 10:05:00.940476] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:12.593 10:05:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.593 10:05:01 -- common/autotest_common.sh@862 -- # return 0 00:27:12.593 10:05:01 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:12.593 10:05:01 -- ftl/common.sh@95 -- # return 0 00:27:12.593 10:05:01 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:12.851 [2024-12-15 10:05:01.709870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.851 [2024-12-15 10:05:01.709904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:12.851 [2024-12-15 10:05:01.709913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:12.851 [2024-12-15 10:05:01.709920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.851 [2024-12-15 10:05:01.709937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.851 [2024-12-15 10:05:01.709944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:12.851 [2024-12-15 10:05:01.709950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:12.851 [2024-12-15 10:05:01.709958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.851 [2024-12-15 10:05:01.709973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.851 [2024-12-15 10:05:01.709979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:12.851 [2024-12-15 10:05:01.709985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:12.851 [2024-12-15 10:05:01.709991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.851 [2024-12-15 10:05:01.710033] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.158 ms, result 0 00:27:12.851 true 00:27:12.851 10:05:01 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:27:13.109 { 00:27:13.109 "name": "ftl", 00:27:13.109 "properties": [ 00:27:13.109 { 00:27:13.109 "name": "superblock_version", 00:27:13.109 "value": 5, 00:27:13.109 "read-only": true 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "name": "base_device", 00:27:13.109 "bands": [ 00:27:13.109 { 00:27:13.109 "id": 0, 00:27:13.109 "state": "CLOSED", 00:27:13.109 "validity": 1.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 1, 00:27:13.109 "state": "CLOSED", 00:27:13.109 "validity": 1.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 2, 00:27:13.109 "state": "CLOSED", 00:27:13.109 "validity": 0.007843137254901933 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 3, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 4, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 5, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 6, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 7, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 8, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 9, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 10, 00:27:13.109 "state": "FREE", 00:27:13.109 "validity": 0.0 00:27:13.109 }, 00:27:13.109 { 00:27:13.109 "id": 11, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 12, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 13, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 14, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 15, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 16, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 17, 00:27:13.110 "state": "FREE", 00:27:13.110 "validity": 0.0 00:27:13.110 } 00:27:13.110 ], 00:27:13.110 "read-only": true 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "name": "cache_device", 00:27:13.110 "type": "bdev", 00:27:13.110 "chunks": [ 00:27:13.110 { 00:27:13.110 "id": 0, 00:27:13.110 "state": "OPEN", 00:27:13.110 "utilization": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 1, 00:27:13.110 "state": "OPEN", 00:27:13.110 "utilization": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 2, 00:27:13.110 "state": "FREE", 00:27:13.110 "utilization": 0.0 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "id": 3, 00:27:13.110 "state": "FREE", 00:27:13.110 "utilization": 0.0 00:27:13.110 } 00:27:13.110 ], 00:27:13.110 "read-only": true 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "name": "verbose_mode", 00:27:13.110 "value": true, 00:27:13.110 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:13.110 }, 00:27:13.110 { 00:27:13.110 "name": "prep_upgrade_on_shutdown", 00:27:13.110 "value": false, 00:27:13.110 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:13.110 } 00:27:13.110 ] 00:27:13.110 } 00:27:13.110 10:05:01 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
00:27:13.110 10:05:01 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:13.110 10:05:01 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:13.110 10:05:02 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:13.110 10:05:02 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:13.110 10:05:02 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:13.110 10:05:02 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:13.110 10:05:02 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:13.368 Validate MD5 checksum, iteration 1 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:13.368 10:05:02 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:13.368 10:05:02 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.368 10:05:02 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.368 10:05:02 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.368 10:05:02 -- ftl/common.sh@154 -- # return 0 00:27:13.368 10:05:02 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:13.368 [2024-12-15 10:05:02.374096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
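tcp_dd is a thin wrapper from test/ftl/common.sh: after confirming the initiator config exists, it runs a single spdk_dd process whose flags are all visible in the xtrace above. Annotated for readability (values verbatim from this run; testfile merely stands in for the literal path):

# One 1 GiB read from the FTL bdev over the NVMe/TCP loopback.
#   --json=ini.json  attaches ftln1 through the target listening on 127.0.0.1:4420
#   --ib=ftln1       input bdev; --of is a plain file for md5sum to hash afterwards
#   --bs/--count     1 MiB blocks x 1024 = one 1 GiB slice
#   --qd=2           queue depth; --skip offsets into the input in bs-sized blocks
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=0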
00:27:13.368 [2024-12-15 10:05:02.374206] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79269 ] 00:27:13.626 [2024-12-15 10:05:02.524438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.885 [2024-12-15 10:05:02.719563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.260  [2024-12-15T10:05:05.228Z] Copying: 724/1024 [MB] (724 MBps) [2024-12-15T10:05:06.612Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:27:17.596 00:27:17.596 10:05:06 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:17.596 10:05:06 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:20.124 Validate MD5 checksum, iteration 2 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@103 -- # sum=30be73dc738e65b72b6cf599b26db831 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@105 -- # [[ 30be73dc738e65b72b6cf599b26db831 != \3\0\b\e\7\3\d\c\7\3\8\e\6\5\b\7\2\b\6\c\f\5\9\9\b\2\6\d\b\8\3\1 ]] 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:20.124 10:05:08 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:20.124 10:05:08 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:20.124 10:05:08 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:20.124 10:05:08 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:20.124 10:05:08 -- ftl/common.sh@154 -- # return 0 00:27:20.124 10:05:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:20.124 [2024-12-15 10:05:08.649292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
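The backslash-riddled right-hand side of the [[ ... != ... ]] line is only xtrace quoting a literal pattern character by character; the actual check is a plain comparison of the fresh digest against the one recorded when the test data was written. Stripped of xtrace noise, one pass of test_validate_checksum amounts to the following sketch (sums[] and iterations are set earlier in the script, outside this excerpt):

skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    [[ $sum == "${sums[i]}" ]] || return 1   # digest must match the written data
    skip=$((skip + 1024))                    # advance to the next 1 GiB slice
done

Iteration 1 lands on 30be73dc738e65b72b6cf599b26db831 and matches, so skip moves to 1024 and the second slice is read.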
00:27:20.124 [2024-12-15 10:05:08.649396] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79342 ] 00:27:20.124 [2024-12-15 10:05:08.797938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.124 [2024-12-15 10:05:08.990175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.505  [2024-12-15T10:05:11.463Z] Copying: 584/1024 [MB] (584 MBps) [2024-12-15T10:05:12.405Z] Copying: 1024/1024 [MB] (average 600 MBps) 00:27:23.389 00:27:23.389 10:05:12 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:23.389 10:05:12 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@103 -- # sum=8c2de056254b492b9ff58f019fdbcebb 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@105 -- # [[ 8c2de056254b492b9ff58f019fdbcebb != \8\c\2\d\e\0\5\6\2\5\4\b\4\9\2\b\9\f\f\5\8\f\0\1\9\f\d\b\c\e\b\b ]] 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:25.924 10:05:14 -- ftl/common.sh@137 -- # [[ -n 79214 ]] 00:27:25.924 10:05:14 -- ftl/common.sh@138 -- # kill -9 79214 00:27:25.924 10:05:14 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:25.924 10:05:14 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:25.924 10:05:14 -- ftl/common.sh@81 -- # local base_bdev= 00:27:25.924 10:05:14 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:25.924 10:05:14 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:25.924 10:05:14 -- ftl/common.sh@89 -- # spdk_tgt_pid=79403 00:27:25.924 10:05:14 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:25.924 10:05:14 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:25.924 10:05:14 -- ftl/common.sh@91 -- # waitforlisten 79403 00:27:25.924 10:05:14 -- common/autotest_common.sh@829 -- # '[' -z 79403 ']' 00:27:25.924 10:05:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.924 10:05:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.924 10:05:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.924 10:05:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.924 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:27:25.924 [2024-12-15 10:05:14.435877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
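Iteration 2 matches as well (8c2de056254b492b9ff58f019fdbcebb), and with the data verified the test forces the failure it exists to exercise: tcp_target_shutdown_dirty kills the target with SIGKILL so FTL never runs its shutdown path, then tcp_target_setup relaunches spdk_tgt from the saved tgt.json. In outline (pid values are per-run; waitforlisten is the common.sh helper invoked below):

kill -9 "$spdk_tgt_pid"    # dirty: no clean-state persist, SHM state left behind
unset spdk_tgt_pid
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs

The 'line 828: 79214 Killed' message below is just bash reporting that SIGKILL, not a failure of the test.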
00:27:25.924 [2024-12-15 10:05:14.435982] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79403 ] 00:27:25.924 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79214 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:25.924 [2024-12-15 10:05:14.583058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.924 [2024-12-15 10:05:14.723318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.924 [2024-12-15 10:05:14.723464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.495 [2024-12-15 10:05:15.341773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:26.495 [2024-12-15 10:05:15.341853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:26.495 [2024-12-15 10:05:15.484908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.495 [2024-12-15 10:05:15.484962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:26.495 [2024-12-15 10:05:15.484976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:26.495 [2024-12-15 10:05:15.484985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.495 [2024-12-15 10:05:15.485045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.495 [2024-12-15 10:05:15.485059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:26.495 [2024-12-15 10:05:15.485069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:26.495 [2024-12-15 10:05:15.485076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.495 [2024-12-15 10:05:15.485099] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:26.495 [2024-12-15 10:05:15.486112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:26.495 [2024-12-15 10:05:15.486154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.495 [2024-12-15 10:05:15.486165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:26.495 [2024-12-15 10:05:15.486175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:27:26.495 [2024-12-15 10:05:15.486182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.495 [2024-12-15 10:05:15.486611] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:26.495 [2024-12-15 10:05:15.505108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.495 [2024-12-15 10:05:15.505161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:26.495 [2024-12-15 10:05:15.505174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.497 ms 00:27:26.495 [2024-12-15 10:05:15.505182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.756 [2024-12-15 10:05:15.514761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.756 [2024-12-15 10:05:15.514807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:26.756 [2024-12-15 10:05:15.514818] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:26.756 [2024-12-15 10:05:15.514826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.756 [2024-12-15 10:05:15.515167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.756 [2024-12-15 10:05:15.515180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:26.756 [2024-12-15 10:05:15.515190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:27:26.756 [2024-12-15 10:05:15.515198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.756 [2024-12-15 10:05:15.515233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.756 [2024-12-15 10:05:15.515242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:26.756 [2024-12-15 10:05:15.515250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:26.756 [2024-12-15 10:05:15.515298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.756 [2024-12-15 10:05:15.515331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.756 [2024-12-15 10:05:15.515349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:26.756 [2024-12-15 10:05:15.515358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:26.756 [2024-12-15 10:05:15.515366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.756 [2024-12-15 10:05:15.515401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:26.756 [2024-12-15 10:05:15.518832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.756 [2024-12-15 10:05:15.519057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:26.756 [2024-12-15 10:05:15.519078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.441 ms 00:27:26.756 [2024-12-15 10:05:15.519086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.756 [2024-12-15 10:05:15.519125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.756 [2024-12-15 10:05:15.519134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:26.756 [2024-12-15 10:05:15.519145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:26.757 [2024-12-15 10:05:15.519153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.757 [2024-12-15 10:05:15.519194] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:26.757 [2024-12-15 10:05:15.519216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:26.757 [2024-12-15 10:05:15.519276] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:26.757 [2024-12-15 10:05:15.519294] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:26.757 [2024-12-15 10:05:15.519370] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:26.757 [2024-12-15 10:05:15.519384] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:26.757 [2024-12-15 10:05:15.519398] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:26.757 [2024-12-15 10:05:15.519409] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519417] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519426] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:26.757 [2024-12-15 10:05:15.519436] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:26.757 [2024-12-15 10:05:15.519446] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:26.757 [2024-12-15 10:05:15.519453] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:26.757 [2024-12-15 10:05:15.519461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.757 [2024-12-15 10:05:15.519469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:26.757 [2024-12-15 10:05:15.519477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:27:26.757 [2024-12-15 10:05:15.519487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.757 [2024-12-15 10:05:15.519555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.757 [2024-12-15 10:05:15.519565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:26.757 [2024-12-15 10:05:15.519573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:26.757 [2024-12-15 10:05:15.519581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.757 [2024-12-15 10:05:15.519656] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:26.757 [2024-12-15 10:05:15.519668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:26.757 [2024-12-15 10:05:15.519677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:26.757 [2024-12-15 10:05:15.519706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:26.757 [2024-12-15 10:05:15.519723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:26.757 [2024-12-15 10:05:15.519731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:26.757 [2024-12-15 10:05:15.519739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:26.757 [2024-12-15 10:05:15.519755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:26.757 [2024-12-15 10:05:15.519762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:26.757 [2024-12-15 10:05:15.519775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:26.757 [2024-12-15 10:05:15.519795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:26.757 [2024-12-15 10:05:15.519802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:26.757 [2024-12-15 10:05:15.519816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:26.757 [2024-12-15 10:05:15.519822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:26.757 [2024-12-15 10:05:15.519835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:26.757 [2024-12-15 10:05:15.519842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:26.757 [2024-12-15 10:05:15.519855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:26.757 [2024-12-15 10:05:15.519862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:26.757 [2024-12-15 10:05:15.519877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:26.757 [2024-12-15 10:05:15.519883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519890] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:26.757 [2024-12-15 10:05:15.519896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:26.757 [2024-12-15 10:05:15.519903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:26.757 [2024-12-15 10:05:15.519915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:26.757 [2024-12-15 10:05:15.519924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:26.757 [2024-12-15 10:05:15.519939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519951] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:26.757 [2024-12-15 10:05:15.519959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:26.757 [2024-12-15 10:05:15.519967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:26.757 [2024-12-15 10:05:15.519975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:26.757 [2024-12-15 10:05:15.519983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:26.757 [2024-12-15 10:05:15.519990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:26.757 [2024-12-15 10:05:15.519998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:26.757 [2024-12-15 10:05:15.520005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:26.757 [2024-12-15 10:05:15.520012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:26.757 [2024-12-15 10:05:15.520018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:26.757 [2024-12-15 10:05:15.520026] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:26.757 [2024-12-15 10:05:15.520035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520044] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:26.757 [2024-12-15 10:05:15.520051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:26.757 [2024-12-15 10:05:15.520079] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:26.757 [2024-12-15 10:05:15.520087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:26.757 [2024-12-15 10:05:15.520094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:26.757 [2024-12-15 10:05:15.520101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:26.757 [2024-12-15 10:05:15.520136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:26.757 [2024-12-15 10:05:15.520143] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:26.757 [2024-12-15 10:05:15.520153] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520160] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:26.757 [2024-12-15 10:05:15.520168] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:26.757 [2024-12-15 10:05:15.520175] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:26.757 
[2024-12-15 10:05:15.520182] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:26.758 [2024-12-15 10:05:15.520191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.520198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:26.758 [2024-12-15 10:05:15.520206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.578 ms 00:27:26.758 [2024-12-15 10:05:15.520222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.535802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.535844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:26.758 [2024-12-15 10:05:15.535859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.518 ms 00:27:26.758 [2024-12-15 10:05:15.535868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.535909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.535917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:26.758 [2024-12-15 10:05:15.535926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:26.758 [2024-12-15 10:05:15.535933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.570777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.570820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:26.758 [2024-12-15 10:05:15.570831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 34.792 ms 00:27:26.758 [2024-12-15 10:05:15.570839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.570873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.570882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:26.758 [2024-12-15 10:05:15.570890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:26.758 [2024-12-15 10:05:15.570899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.570999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.571011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:26.758 [2024-12-15 10:05:15.571024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:26.758 [2024-12-15 10:05:15.571032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.571071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.571085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:26.758 [2024-12-15 10:05:15.571095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:26.758 [2024-12-15 10:05:15.571103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.589300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.589479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:26.758 [2024-12-15 
10:05:15.589499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.174 ms 00:27:26.758 [2024-12-15 10:05:15.589507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.589620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.589631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:26.758 [2024-12-15 10:05:15.589640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:26.758 [2024-12-15 10:05:15.589648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.608580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.608777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:26.758 [2024-12-15 10:05:15.608800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.912 ms 00:27:26.758 [2024-12-15 10:05:15.608809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.618535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.618576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:26.758 [2024-12-15 10:05:15.618587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:27:26.758 [2024-12-15 10:05:15.618596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.677369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.677406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:26.758 [2024-12-15 10:05:15.677417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 58.715 ms 00:27:26.758 [2024-12-15 10:05:15.677425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.677504] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:26.758 [2024-12-15 10:05:15.677545] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:26.758 [2024-12-15 10:05:15.677583] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:26.758 [2024-12-15 10:05:15.677622] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:26.758 [2024-12-15 10:05:15.677629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.677636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:26.758 [2024-12-15 10:05:15.677646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:27:26.758 [2024-12-15 10:05:15.677656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.677702] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:26.758 [2024-12-15 10:05:15.677712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.677720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:26.758 [2024-12-15 10:05:15.677727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:26.758 [2024-12-15 
10:05:15.677734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.692913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.692946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:26.758 [2024-12-15 10:05:15.692957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.158 ms 00:27:26.758 [2024-12-15 10:05:15.692964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.701382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.701415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:26.758 [2024-12-15 10:05:15.701424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:26.758 [2024-12-15 10:05:15.701432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.701481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.758 [2024-12-15 10:05:15.701490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:26.758 [2024-12-15 10:05:15.701498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:26.758 [2024-12-15 10:05:15.701505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.758 [2024-12-15 10:05:15.701653] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:27.330 [2024-12-15 10:05:16.317464] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:27.330 [2024-12-15 10:05:16.317611] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:27.898 [2024-12-15 10:05:16.869655] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:27.898 [2024-12-15 10:05:16.869783] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:27.898 [2024-12-15 10:05:16.869798] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:27.898 [2024-12-15 10:05:16.869810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.898 [2024-12-15 10:05:16.869821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:27.898 [2024-12-15 10:05:16.869834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1168.280 ms 00:27:27.898 [2024-12-15 10:05:16.869843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.898 [2024-12-15 10:05:16.869892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.898 [2024-12-15 10:05:16.869902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:27.898 [2024-12-15 10:05:16.869913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:27.898 [2024-12-15 10:05:16.869921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.898 [2024-12-15 10:05:16.882108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:27.898 [2024-12-15 10:05:16.882238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.898 [2024-12-15 10:05:16.882249] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:27.898 [2024-12-15 10:05:16.882287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.299 ms 00:27:27.898 [2024-12-15 10:05:16.882295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.898 [2024-12-15 10:05:16.882980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.898 [2024-12-15 10:05:16.883009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:27.898 [2024-12-15 10:05:16.883019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.611 ms 00:27:27.898 [2024-12-15 10:05:16.883027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.898 [2024-12-15 10:05:16.885285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.898 [2024-12-15 10:05:16.885312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:27.898 [2024-12-15 10:05:16.885322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.241 ms 00:27:27.898 [2024-12-15 10:05:16.885330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.158 [2024-12-15 10:05:16.911777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.158 [2024-12-15 10:05:16.911834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:28.158 [2024-12-15 10:05:16.911847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.421 ms 00:27:28.158 [2024-12-15 10:05:16.911855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.158 [2024-12-15 10:05:16.911991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.158 [2024-12-15 10:05:16.912005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:28.158 [2024-12-15 10:05:16.912015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:28.158 [2024-12-15 10:05:16.912025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.158 [2024-12-15 10:05:16.913591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.158 [2024-12-15 10:05:16.913641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:28.158 [2024-12-15 10:05:16.913652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.545 ms 00:27:28.158 [2024-12-15 10:05:16.913660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.159 [2024-12-15 10:05:16.913699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.159 [2024-12-15 10:05:16.913707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:28.159 [2024-12-15 10:05:16.913716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:28.159 [2024-12-15 10:05:16.913724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.159 [2024-12-15 10:05:16.913761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:28.159 [2024-12-15 10:05:16.913773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.159 [2024-12-15 10:05:16.913781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:28.159 [2024-12-15 10:05:16.913793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:28.159 [2024-12-15 10:05:16.913801] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:28.159 [2024-12-15 10:05:16.913861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.159 [2024-12-15 10:05:16.913871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:28.159 [2024-12-15 10:05:16.913880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:28.159 [2024-12-15 10:05:16.913888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.159 [2024-12-15 10:05:16.915004] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1429.621 ms, result 0 00:27:28.159 [2024-12-15 10:05:16.928207] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.159 [2024-12-15 10:05:16.944202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:28.159 [2024-12-15 10:05:16.952415] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:28.419 Validate MD5 checksum, iteration 1 00:27:28.419 10:05:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.419 10:05:17 -- common/autotest_common.sh@862 -- # return 0 00:27:28.419 10:05:17 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:28.419 10:05:17 -- ftl/common.sh@95 -- # return 0 00:27:28.419 10:05:17 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:28.419 10:05:17 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:28.419 10:05:17 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:28.419 10:05:17 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:28.419 10:05:17 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:28.419 10:05:17 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:28.419 10:05:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:28.419 10:05:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:28.419 10:05:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:28.419 10:05:17 -- ftl/common.sh@154 -- # return 0 00:27:28.419 10:05:17 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:28.419 [2024-12-15 10:05:17.348491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
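This second startup is the one under test: because the previous instance died under kill -9, FTL comes up dirty and must restore its P2L checkpoints (58.715 ms), replay the two open NV-cache chunks at offsets 8032 and 270176 (seq ids 14 and 15, roughly 1.17 s between them), and restore the L2P from SHM, for a total 'FTL startup' of 1429.621 ms against 273.387 ms for the clean start at the top of this excerpt. A convenience one-liner (not part of the test) for ranking where a dirty start spends its time in a log like this:

# Top five slowest management steps; 'build.log' is whatever file holds this console output.
grep -o 'duration: [0-9.]* ms' build.log | sort -k2 -rn | head -n 5

With the bdev back online and the target listening again, the same checksum loop reruns.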
00:27:28.419 [2024-12-15 10:05:17.348871] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79442 ] 00:27:28.680 [2024-12-15 10:05:17.496048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.680 [2024-12-15 10:05:17.671964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.594  [2024-12-15T10:05:19.870Z] Copying: 652/1024 [MB] (652 MBps) [2024-12-15T10:05:20.810Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:27:31.794 00:27:31.794 10:05:20 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:31.794 10:05:20 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@103 -- # sum=30be73dc738e65b72b6cf599b26db831 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@105 -- # [[ 30be73dc738e65b72b6cf599b26db831 != \3\0\b\e\7\3\d\c\7\3\8\e\6\5\b\7\2\b\6\c\f\5\9\9\b\2\6\d\b\8\3\1 ]] 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:34.324 Validate MD5 checksum, iteration 2 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:34.324 10:05:22 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:34.324 10:05:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:34.324 10:05:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:34.324 10:05:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:34.324 10:05:22 -- ftl/common.sh@154 -- # return 0 00:27:34.324 10:05:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:34.324 [2024-12-15 10:05:22.879886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
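The post-recovery pass reproduces 30be73dc738e65b72b6cf599b26db831 for the first slice, identical to the digest taken before the kill, which is the property the whole test asserts: data acknowledged before a dirty shutdown must be readable after recovery. Written out as the bare invariant (variable names illustrative, digests from this run):

pre_kill_sum=30be73dc738e65b72b6cf599b26db831    # first pass, before kill -9
post_sum=$(md5sum "$testfile" | cut -f1 -d' ')   # same slice, after recovery
[[ $post_sum == "$pre_kill_sum" ]] || exit 1     # any mismatch fails the test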
00:27:34.324 [2024-12-15 10:05:22.880081] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79504 ] 00:27:34.324 [2024-12-15 10:05:23.023221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.324 [2024-12-15 10:05:23.186603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.704  [2024-12-15T10:05:25.321Z] Copying: 675/1024 [MB] (675 MBps) [2024-12-15T10:05:29.515Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:27:40.499 00:27:40.499 10:05:29 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:40.499 10:05:29 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@103 -- # sum=8c2de056254b492b9ff58f019fdbcebb 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@105 -- # [[ 8c2de056254b492b9ff58f019fdbcebb != \8\c\2\d\e\0\5\6\2\5\4\b\4\9\2\b\9\f\f\5\8\f\0\1\9\f\d\b\c\e\b\b ]] 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:43.035 10:05:31 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:43.035 10:05:31 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:43.035 10:05:31 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:43.035 10:05:31 -- ftl/common.sh@130 -- # [[ -n 79403 ]] 00:27:43.035 10:05:31 -- ftl/common.sh@131 -- # killprocess 79403 00:27:43.035 10:05:31 -- common/autotest_common.sh@936 -- # '[' -z 79403 ']' 00:27:43.035 10:05:31 -- common/autotest_common.sh@940 -- # kill -0 79403 00:27:43.035 10:05:31 -- common/autotest_common.sh@941 -- # uname 00:27:43.035 10:05:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:43.035 10:05:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79403 00:27:43.035 killing process with pid 79403 00:27:43.035 10:05:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:43.035 10:05:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:43.035 10:05:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79403' 00:27:43.035 10:05:31 -- common/autotest_common.sh@955 -- # kill 79403 00:27:43.035 10:05:31 -- common/autotest_common.sh@960 -- # wait 79403 00:27:43.295 [2024-12-15 10:05:32.173845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:43.295 [2024-12-15 10:05:32.186568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.186601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:43.295 [2024-12-15 10:05:32.186611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:43.295 [2024-12-15 10:05:32.186618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 
[2024-12-15 10:05:32.186635] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:43.295 [2024-12-15 10:05:32.188650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.188675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:43.295 [2024-12-15 10:05:32.188684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.004 ms 00:27:43.295 [2024-12-15 10:05:32.188689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.188884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.188895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:43.295 [2024-12-15 10:05:32.188902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:27:43.295 [2024-12-15 10:05:32.188908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.190448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.190486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:43.295 [2024-12-15 10:05:32.190503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:27:43.295 [2024-12-15 10:05:32.190517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.191396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.191487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:43.295 [2024-12-15 10:05:32.191584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:27:43.295 [2024-12-15 10:05:32.191604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.199083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.199188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:43.295 [2024-12-15 10:05:32.199235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.440 ms 00:27:43.295 [2024-12-15 10:05:32.199269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.203490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.203604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:43.295 [2024-12-15 10:05:32.203700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.174 ms 00:27:43.295 [2024-12-15 10:05:32.203718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.204036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.204278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:43.295 [2024-12-15 10:05:32.204300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:43.295 [2024-12-15 10:05:32.204321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.211466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.211553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:43.295 [2024-12-15 10:05:32.211611] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.116 ms 00:27:43.295 [2024-12-15 10:05:32.211628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.218872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.218950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:43.295 [2024-12-15 10:05:32.218985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.201 ms 00:27:43.295 [2024-12-15 10:05:32.219000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.226066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.226146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:43.295 [2024-12-15 10:05:32.226183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.036 ms 00:27:43.295 [2024-12-15 10:05:32.226199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.233411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.233488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:43.295 [2024-12-15 10:05:32.233529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.146 ms 00:27:43.295 [2024-12-15 10:05:32.233545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.233575] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:43.295 [2024-12-15 10:05:32.233595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.295 [2024-12-15 10:05:32.233619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.295 [2024-12-15 10:05:32.233677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:43.295 [2024-12-15 10:05:32.233701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.233978] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.234001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.234008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.234015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.234021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:43.295 [2024-12-15 10:05:32.234028] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:43.295 [2024-12-15 10:05:32.234036] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c19a5606-2d6f-46d2-8ee5-e9060cf83d01 00:27:43.295 [2024-12-15 10:05:32.234042] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:43.295 [2024-12-15 10:05:32.234048] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:43.295 [2024-12-15 10:05:32.234054] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:43.295 [2024-12-15 10:05:32.234060] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:43.295 [2024-12-15 10:05:32.234066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:43.295 [2024-12-15 10:05:32.234072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:43.295 [2024-12-15 10:05:32.234078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:43.295 [2024-12-15 10:05:32.234083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:43.295 [2024-12-15 10:05:32.234088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:43.295 [2024-12-15 10:05:32.234093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.234099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:43.295 [2024-12-15 10:05:32.234105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:27:43.295 [2024-12-15 10:05:32.234111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.243519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.243541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:43.295 [2024-12-15 10:05:32.243550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.390 ms 00:27:43.295 [2024-12-15 10:05:32.243556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.243703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.295 [2024-12-15 10:05:32.243710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:43.295 [2024-12-15 10:05:32.243720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.133 ms 00:27:43.295 [2024-12-15 10:05:32.243725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.278661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.295 [2024-12-15 10:05:32.278686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:43.295 [2024-12-15 10:05:32.278694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:43.295 [2024-12-15 10:05:32.278701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.295 [2024-12-15 10:05:32.278725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.295 [2024-12-15 10:05:32.278731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:43.295 [2024-12-15 10:05:32.278740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.295 [2024-12-15 10:05:32.278746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.296 [2024-12-15 10:05:32.278794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.296 [2024-12-15 10:05:32.278802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:43.296 [2024-12-15 10:05:32.278808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.296 [2024-12-15 10:05:32.278814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.296 [2024-12-15 10:05:32.278827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.296 [2024-12-15 10:05:32.278832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:43.296 [2024-12-15 10:05:32.278838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.296 [2024-12-15 10:05:32.278846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.554 [2024-12-15 10:05:32.336603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.554 [2024-12-15 10:05:32.336637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:43.554 [2024-12-15 10:05:32.336646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.554 [2024-12-15 10:05:32.336653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.554 [2024-12-15 10:05:32.358430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.554 [2024-12-15 10:05:32.358562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:43.554 [2024-12-15 10:05:32.358574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.554 [2024-12-15 10:05:32.358585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.555 [2024-12-15 10:05:32.359532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:43.555 [2024-12-15 10:05:32.359539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.555 [2024-12-15 10:05:32.359545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.555 [2024-12-15 10:05:32.359589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:43.555 [2024-12-15 10:05:32.359595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.555 [2024-12-15 10:05:32.359601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.555 [2024-12-15 10:05:32.359681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:43.555 [2024-12-15 10:05:32.359687] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.555 [2024-12-15 10:05:32.359693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.555 [2024-12-15 10:05:32.359722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:43.555 [2024-12-15 10:05:32.359728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.555 [2024-12-15 10:05:32.359734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.555 [2024-12-15 10:05:32.359772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:43.555 [2024-12-15 10:05:32.359778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.555 [2024-12-15 10:05:32.359783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.555 [2024-12-15 10:05:32.359822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:43.555 [2024-12-15 10:05:32.359828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.555 [2024-12-15 10:05:32.359833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.555 [2024-12-15 10:05:32.359923] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 173.336 ms, result 0 00:27:44.124 10:05:33 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:44.124 10:05:33 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:44.124 10:05:33 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:44.124 10:05:33 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:44.124 10:05:33 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:44.124 10:05:33 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:44.124 Remove shared memory files 00:27:44.124 10:05:33 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:44.124 10:05:33 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:44.124 10:05:33 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:44.124 10:05:33 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:44.124 10:05:33 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79214 00:27:44.124 10:05:33 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:44.124 10:05:33 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:44.124 ************************************ 00:27:44.124 END TEST ftl_upgrade_shutdown 00:27:44.124 ************************************ 00:27:44.124 00:27:44.124 real 1m24.936s 00:27:44.124 user 1m57.726s 00:27:44.124 sys 0m19.259s 00:27:44.124 10:05:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:44.124 10:05:33 -- common/autotest_common.sh@10 -- # set +x 00:27:44.124 10:05:33 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:44.124 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:44.124 10:05:33 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:44.124 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:44.124 10:05:33 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:44.124 10:05:33 -- ftl/ftl.sh@14 -- # killprocess 70509 00:27:44.124 Process with 
pid 70509 is not found 00:27:44.124 10:05:33 -- common/autotest_common.sh@936 -- # '[' -z 70509 ']' 00:27:44.124 10:05:33 -- common/autotest_common.sh@940 -- # kill -0 70509 00:27:44.124 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70509) - No such process 00:27:44.124 10:05:33 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70509 is not found' 00:27:44.124 10:05:33 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:44.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.124 10:05:33 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79651 00:27:44.124 10:05:33 -- ftl/ftl.sh@20 -- # waitforlisten 79651 00:27:44.124 10:05:33 -- common/autotest_common.sh@829 -- # '[' -z 79651 ']' 00:27:44.124 10:05:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.124 10:05:33 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:44.124 10:05:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.124 10:05:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.124 10:05:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.124 10:05:33 -- common/autotest_common.sh@10 -- # set +x 00:27:44.124 [2024-12-15 10:05:33.132027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:44.124 [2024-12-15 10:05:33.132314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79651 ] 00:27:44.383 [2024-12-15 10:05:33.282118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.641 [2024-12-15 10:05:33.418685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:44.641 [2024-12-15 10:05:33.418836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.212 10:05:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.212 10:05:33 -- common/autotest_common.sh@862 -- # return 0 00:27:45.212 10:05:33 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:45.212 nvme0n1 00:27:45.212 10:05:34 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:45.212 10:05:34 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:45.212 10:05:34 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:45.473 10:05:34 -- ftl/common.sh@28 -- # stores=c5a9dd39-b0fd-4779-a74b-a1f9643ccca7 00:27:45.473 10:05:34 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:45.473 10:05:34 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5a9dd39-b0fd-4779-a74b-a1f9643ccca7 00:27:45.733 10:05:34 -- ftl/ftl.sh@23 -- # killprocess 79651 00:27:45.733 10:05:34 -- common/autotest_common.sh@936 -- # '[' -z 79651 ']' 00:27:45.733 10:05:34 -- common/autotest_common.sh@940 -- # kill -0 79651 00:27:45.733 10:05:34 -- common/autotest_common.sh@941 -- # uname 00:27:45.733 10:05:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:45.733 10:05:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79651 00:27:45.733 killing process with pid 79651 00:27:45.733 10:05:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:45.733 10:05:34 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:45.733 10:05:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79651' 00:27:45.733 10:05:34 -- common/autotest_common.sh@955 -- # kill 79651 00:27:45.733 10:05:34 -- common/autotest_common.sh@960 -- # wait 79651 00:27:47.642 10:05:36 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:47.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:47.643 Waiting for block devices as requested 00:27:47.643 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.643 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.643 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.903 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:53.189 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:53.189 Remove shared memory files 00:27:53.189 10:05:41 -- ftl/ftl.sh@28 -- # remove_shm 00:27:53.189 10:05:41 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:53.189 10:05:41 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:53.189 10:05:41 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:53.189 10:05:41 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:53.189 10:05:41 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:53.189 10:05:41 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:53.190 ************************************ 00:27:53.190 END TEST ftl 00:27:53.190 ************************************ 00:27:53.190 00:27:53.190 real 13m16.306s 00:27:53.190 user 15m14.041s 00:27:53.190 sys 1m12.644s 00:27:53.190 10:05:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:53.190 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:27:53.190 10:05:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:53.190 10:05:41 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:53.190 10:05:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:53.190 10:05:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:53.190 10:05:41 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:53.190 10:05:41 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:53.190 10:05:41 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:53.190 10:05:41 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:53.190 10:05:41 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:53.190 10:05:41 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:53.190 10:05:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.190 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:27:53.190 10:05:41 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:53.190 10:05:41 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:53.190 10:05:41 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:53.190 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:27:54.573 INFO: APP EXITING 00:27:54.573 INFO: killing all VMs 00:27:54.573 INFO: killing vhost app 00:27:54.573 INFO: EXIT DONE 00:27:55.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:55.143 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:55.143 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:55.143 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:55.143 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:55.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:27:55.977 Cleaning 00:27:55.977 Removing: /var/run/dpdk/spdk0/config 00:27:55.977 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:55.977 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:55.977 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:55.977 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:55.977 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:55.977 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:55.977 Removing: /var/run/dpdk/spdk0 00:27:55.977 Removing: /var/run/dpdk/spdk_pid55979 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56180 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56479 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56579 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56674 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56786 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56871 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56916 00:27:55.977 Removing: /var/run/dpdk/spdk_pid56947 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57022 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57128 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57552 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57617 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57683 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57707 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57805 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57816 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57920 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57938 00:27:55.977 Removing: /var/run/dpdk/spdk_pid57991 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58009 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58062 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58080 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58231 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58274 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58362 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58421 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58452 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58519 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58545 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58586 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58612 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58653 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58680 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58726 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58752 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58793 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58819 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58860 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58886 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58923 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58949 00:27:55.977 Removing: /var/run/dpdk/spdk_pid58991 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59012 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59055 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59081 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59122 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59148 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59189 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59215 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59256 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59282 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59323 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59349 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59390 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59418 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59459 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59480 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59525 00:27:55.977 Removing: 
/var/run/dpdk/spdk_pid59552 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59593 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59628 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59672 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59701 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59745 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59760 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59801 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59827 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59879 00:27:55.977 Removing: /var/run/dpdk/spdk_pid59958 00:27:55.977 Removing: /var/run/dpdk/spdk_pid60076 00:27:55.977 Removing: /var/run/dpdk/spdk_pid60253 00:27:55.977 Removing: /var/run/dpdk/spdk_pid60350 00:27:55.977 Removing: /var/run/dpdk/spdk_pid60391 00:27:55.977 Removing: /var/run/dpdk/spdk_pid60857 00:27:55.977 Removing: /var/run/dpdk/spdk_pid61051 00:27:55.977 Removing: /var/run/dpdk/spdk_pid61161 00:27:55.977 Removing: /var/run/dpdk/spdk_pid61214 00:27:55.977 Removing: /var/run/dpdk/spdk_pid61245 00:27:55.977 Removing: /var/run/dpdk/spdk_pid61328 00:27:55.977 Removing: /var/run/dpdk/spdk_pid61991 00:27:55.977 Removing: /var/run/dpdk/spdk_pid62022 00:27:55.977 Removing: /var/run/dpdk/spdk_pid62493 00:27:56.240 Removing: /var/run/dpdk/spdk_pid62596 00:27:56.240 Removing: /var/run/dpdk/spdk_pid62711 00:27:56.240 Removing: /var/run/dpdk/spdk_pid62764 00:27:56.240 Removing: /var/run/dpdk/spdk_pid62784 00:27:56.240 Removing: /var/run/dpdk/spdk_pid62815 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64745 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64884 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64888 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64906 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64973 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64977 00:27:56.240 Removing: /var/run/dpdk/spdk_pid64989 00:27:56.240 Removing: /var/run/dpdk/spdk_pid65057 00:27:56.240 Removing: /var/run/dpdk/spdk_pid65061 00:27:56.240 Removing: /var/run/dpdk/spdk_pid65073 00:27:56.240 Removing: /var/run/dpdk/spdk_pid65128 00:27:56.240 Removing: /var/run/dpdk/spdk_pid65132 00:27:56.240 Removing: /var/run/dpdk/spdk_pid65144 00:27:56.240 Removing: /var/run/dpdk/spdk_pid66578 00:27:56.240 Removing: /var/run/dpdk/spdk_pid66676 00:27:56.240 Removing: /var/run/dpdk/spdk_pid66804 00:27:56.240 Removing: /var/run/dpdk/spdk_pid66891 00:27:56.240 Removing: /var/run/dpdk/spdk_pid66967 00:27:56.240 Removing: /var/run/dpdk/spdk_pid67043 00:27:56.240 Removing: /var/run/dpdk/spdk_pid67142 00:27:56.240 Removing: /var/run/dpdk/spdk_pid67222 00:27:56.240 Removing: /var/run/dpdk/spdk_pid67364 00:27:56.240 Removing: /var/run/dpdk/spdk_pid67744 00:27:56.240 Removing: /var/run/dpdk/spdk_pid67776 00:27:56.240 Removing: /var/run/dpdk/spdk_pid68217 00:27:56.240 Removing: /var/run/dpdk/spdk_pid68403 00:27:56.240 Removing: /var/run/dpdk/spdk_pid68507 00:27:56.240 Removing: /var/run/dpdk/spdk_pid68620 00:27:56.240 Removing: /var/run/dpdk/spdk_pid68662 00:27:56.240 Removing: /var/run/dpdk/spdk_pid68693 00:27:56.240 Removing: /var/run/dpdk/spdk_pid69007 00:27:56.240 Removing: /var/run/dpdk/spdk_pid69069 00:27:56.240 Removing: /var/run/dpdk/spdk_pid69144 00:27:56.240 Removing: /var/run/dpdk/spdk_pid69535 00:27:56.240 Removing: /var/run/dpdk/spdk_pid69694 00:27:56.240 Removing: /var/run/dpdk/spdk_pid70509 00:27:56.240 Removing: /var/run/dpdk/spdk_pid70635 00:27:56.240 Removing: /var/run/dpdk/spdk_pid70838 00:27:56.240 Removing: /var/run/dpdk/spdk_pid70930 00:27:56.240 Removing: /var/run/dpdk/spdk_pid71266 00:27:56.240 Removing: /var/run/dpdk/spdk_pid71520 
00:27:56.240 Removing: /var/run/dpdk/spdk_pid71880 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72097 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72241 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72289 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72489 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72525 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72584 00:27:56.240 Removing: /var/run/dpdk/spdk_pid72848 00:27:56.240 Removing: /var/run/dpdk/spdk_pid73102 00:27:56.240 Removing: /var/run/dpdk/spdk_pid73596 00:27:56.240 Removing: /var/run/dpdk/spdk_pid74407 00:27:56.240 Removing: /var/run/dpdk/spdk_pid75137 00:27:56.240 Removing: /var/run/dpdk/spdk_pid75992 00:27:56.240 Removing: /var/run/dpdk/spdk_pid76147 00:27:56.240 Removing: /var/run/dpdk/spdk_pid76230 00:27:56.240 Removing: /var/run/dpdk/spdk_pid76582 00:27:56.240 Removing: /var/run/dpdk/spdk_pid76646 00:27:56.240 Removing: /var/run/dpdk/spdk_pid77368 00:27:56.240 Removing: /var/run/dpdk/spdk_pid77838 00:27:56.240 Removing: /var/run/dpdk/spdk_pid78625 00:27:56.240 Removing: /var/run/dpdk/spdk_pid78767 00:27:56.240 Removing: /var/run/dpdk/spdk_pid78818 00:27:56.240 Removing: /var/run/dpdk/spdk_pid78878 00:27:56.240 Removing: /var/run/dpdk/spdk_pid78936 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79000 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79214 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79269 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79342 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79403 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79442 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79504 00:27:56.240 Removing: /var/run/dpdk/spdk_pid79651 00:27:56.240 Clean 00:27:56.502 killing process with pid 48164 00:27:56.502 killing process with pid 48170 00:27:56.502 10:05:45 -- common/autotest_common.sh@1446 -- # return 0 00:27:56.502 10:05:45 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:56.502 10:05:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.502 10:05:45 -- common/autotest_common.sh@10 -- # set +x 00:27:56.502 10:05:45 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:56.502 10:05:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.502 10:05:45 -- common/autotest_common.sh@10 -- # set +x 00:27:56.502 10:05:45 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:56.502 10:05:45 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:56.502 10:05:45 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:56.502 10:05:45 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:56.502 10:05:45 -- spdk/autotest.sh@383 -- # hostname 00:27:56.502 10:05:45 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:56.763 geninfo: WARNING: invalid characters removed from testname! 
00:28:23.349 10:06:08 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:23.349 10:06:11 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:24.734 10:06:13 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:26.648 10:06:15 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:29.248 10:06:18 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:31.796 10:06:20 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:33.708 10:06:22 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:33.708 10:06:22 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:33.708 10:06:22 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:33.708 10:06:22 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:33.969 10:06:22 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:33.969 10:06:22 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:33.969 10:06:22 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:33.969 10:06:22 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:33.969 10:06:22 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:33.969 10:06:22 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:33.969 10:06:22 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:33.969 10:06:22 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:33.969 10:06:22 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:33.969 10:06:22 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:33.969 10:06:22 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:33.969 10:06:22 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:28:33.969 10:06:22 -- scripts/common.sh@343 -- $ case "$op" in 00:28:33.969 10:06:22 -- scripts/common.sh@344 -- $ : 1 00:28:33.969 10:06:22 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:33.969 10:06:22 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.969 10:06:22 -- scripts/common.sh@364 -- $ decimal 1 00:28:33.969 10:06:22 -- scripts/common.sh@352 -- $ local d=1 00:28:33.969 10:06:22 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:33.969 10:06:22 -- scripts/common.sh@354 -- $ echo 1 00:28:33.969 10:06:22 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:33.969 10:06:22 -- scripts/common.sh@365 -- $ decimal 2 00:28:33.969 10:06:22 -- scripts/common.sh@352 -- $ local d=2 00:28:33.969 10:06:22 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:33.969 10:06:22 -- scripts/common.sh@354 -- $ echo 2 00:28:33.969 10:06:22 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:33.969 10:06:22 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:33.969 10:06:22 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:33.969 10:06:22 -- scripts/common.sh@367 -- $ return 0 00:28:33.969 10:06:22 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.969 10:06:22 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.969 --rc genhtml_branch_coverage=1 00:28:33.969 --rc genhtml_function_coverage=1 00:28:33.969 --rc genhtml_legend=1 00:28:33.969 --rc geninfo_all_blocks=1 00:28:33.969 --rc geninfo_unexecuted_blocks=1 00:28:33.969 00:28:33.969 ' 00:28:33.969 10:06:22 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.969 --rc genhtml_branch_coverage=1 00:28:33.969 --rc genhtml_function_coverage=1 00:28:33.969 --rc genhtml_legend=1 00:28:33.969 --rc geninfo_all_blocks=1 00:28:33.969 --rc geninfo_unexecuted_blocks=1 00:28:33.969 00:28:33.969 ' 00:28:33.969 10:06:22 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.969 --rc genhtml_branch_coverage=1 00:28:33.969 --rc genhtml_function_coverage=1 00:28:33.969 --rc genhtml_legend=1 00:28:33.969 --rc geninfo_all_blocks=1 00:28:33.969 --rc geninfo_unexecuted_blocks=1 00:28:33.969 00:28:33.969 ' 00:28:33.969 10:06:22 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.969 --rc genhtml_branch_coverage=1 00:28:33.969 --rc genhtml_function_coverage=1 00:28:33.969 --rc genhtml_legend=1 00:28:33.969 --rc geninfo_all_blocks=1 00:28:33.969 --rc geninfo_unexecuted_blocks=1 00:28:33.969 00:28:33.969 ' 00:28:33.969 10:06:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.969 10:06:22 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:33.969 10:06:22 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.969 10:06:22 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.969 10:06:22 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.969 10:06:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.969 10:06:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.969 10:06:22 -- paths/export.sh@5 -- $ export PATH 00:28:33.969 10:06:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.969 10:06:22 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:33.969 10:06:22 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:33.969 10:06:22 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734257182.XXXXXX 00:28:33.969 10:06:22 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734257182.1vETlq 00:28:33.969 10:06:22 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:33.969 10:06:22 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:28:33.969 10:06:22 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:33.969 10:06:22 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:33.969 10:06:22 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:33.969 10:06:22 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:33.969 10:06:22 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:33.969 10:06:22 -- common/autotest_common.sh@10 -- $ set +x 00:28:33.969 10:06:22 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:33.969 10:06:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:33.969 10:06:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:33.970 10:06:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:33.970 10:06:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:33.970 10:06:22 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:33.970 10:06:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:33.970 10:06:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:33.970 10:06:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:33.970 10:06:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:33.970 10:06:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:33.970 + [[ -n 4990 ]] 00:28:33.970 + sudo kill 4990 00:28:33.980 [Pipeline] } 00:28:33.997 [Pipeline] // timeout 00:28:34.002 [Pipeline] } 00:28:34.017 [Pipeline] // stage 00:28:34.022 [Pipeline] } 00:28:34.037 [Pipeline] // catchError 00:28:34.047 [Pipeline] stage 00:28:34.049 [Pipeline] { (Stop VM) 00:28:34.061 [Pipeline] sh 00:28:34.348 + vagrant halt 00:28:36.892 ==> default: Halting domain... 00:28:43.492 [Pipeline] sh 00:28:43.775 + vagrant destroy -f 00:28:46.319 ==> default: Removing domain... 00:28:46.904 [Pipeline] sh 00:28:47.189 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:47.199 [Pipeline] } 00:28:47.213 [Pipeline] // stage 00:28:47.218 [Pipeline] } 00:28:47.231 [Pipeline] // dir 00:28:47.236 [Pipeline] } 00:28:47.250 [Pipeline] // wrap 00:28:47.255 [Pipeline] } 00:28:47.267 [Pipeline] // catchError 00:28:47.275 [Pipeline] stage 00:28:47.278 [Pipeline] { (Epilogue) 00:28:47.290 [Pipeline] sh 00:28:47.575 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:52.859 [Pipeline] catchError 00:28:52.861 [Pipeline] { 00:28:52.874 [Pipeline] sh 00:28:53.160 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:53.160 Artifacts sizes are good 00:28:53.171 [Pipeline] } 00:28:53.184 [Pipeline] // catchError 00:28:53.194 [Pipeline] archiveArtifacts 00:28:53.238 Archiving artifacts 00:28:53.366 [Pipeline] cleanWs 00:28:53.388 [WS-CLEANUP] Deleting project workspace... 00:28:53.388 [WS-CLEANUP] Deferred wipeout is used... 00:28:53.415 [WS-CLEANUP] done 00:28:53.417 [Pipeline] } 00:28:53.433 [Pipeline] // stage 00:28:53.439 [Pipeline] } 00:28:53.453 [Pipeline] // node 00:28:53.458 [Pipeline] End of Pipeline 00:28:53.520 Finished: SUCCESS